Starkey Research & Clinical Blog

Cochlear Dead Regions and High Frequency Gain: How to Fit the Hearing Aid

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors.

Cochlear dead regions (DRs) are defined as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). A dead region does not necessarily result in an inability to perceive sound in the affected frequency range; rather, the sound is perceived via a spread of excitation to adjacent regions of the cochlea where the inner hair cells are still functioning. Because the response is spread over a broader tonotopic region, patients with cochlear dead regions may perceive some high frequency pure tones as “clicks”, “buzzes” or “whooshes” rather than tones.

Dead regions can be present at moderate hearing thresholds (e.g., 60 dB HL) and are more likely to be present as the degree of loss increases. Psychophysical tuning curves are the preferred method for identifying cochlear dead regions in the laboratory, but they are complicated and time consuming to measure. Moore and his colleagues developed the Threshold Equalizing Noise (TEN) Test as a clinical means of identifying dead regions (Moore et al., 2000; Moore et al., 2004). The TEN procedure looks for shifts in masked thresholds beyond what would typically be expected for a given hearing loss. A threshold obtained in TEN masking noise that is shifted by at least 10 dB indicates the likely presence of a cochlear dead region.
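As a rough sketch, the 10 dB shift criterion described above can be expressed as a simple check. This is a simplification for illustration only; the published TEN test procedure applies additional criteria, such as comparing the masked threshold to the level of the TEN itself.

```python
def likely_dead_region(absolute_threshold_db, ten_masked_threshold_db, criterion_db=10):
    """Flag a likely dead region when the threshold measured in TEN masking
    noise is shifted at least `criterion_db` beyond the absolute threshold.
    (Simplified sketch of the criterion; not the full clinical protocol.)"""
    shift = ten_masked_threshold_db - absolute_threshold_db
    return shift >= criterion_db

# Hypothetical example: absolute threshold 70 dB HL, TEN-masked threshold 82 dB HL
print(likely_dead_region(70, 82))  # 12 dB shift -> True (likely DR)
print(likely_dead_region(70, 75))  # 5 dB shift -> False
```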

Historically, there has been a lack of consensus among clinical investigators as to whether high frequency gain is beneficial for hearing aid users who have cochlear dead regions. Some studies suggest that high frequency gain could have deleterious effects on speech perception and should be limited for individuals with cochlear dead regions (Moore, 2001b; Turner, 1999; Padilha et al., 2007). For example, Vickers et al. (2001) and Baer et al. (2002) studied the benefit of high frequency amplification in quiet and noise for individuals with and without DRs. Both studies reported that individuals with DRs were unable to benefit from high frequency amplification. Similarly, Gordo & Iorio (2007) found that hearing aid users with DRs performed worse with high-frequency amplification than without it.

In contrast, Cox and her colleagues (2011) found beneficial effects of high frequency audibility whether or not the participants had dead regions. Others have reported equivalent performance for participants with and without dead regions for quiet and low noise conditions; however, in high noise conditions the individuals without dead regions demonstrated further improvement when additional high frequency amplification was provided, whereas participants with dead regions did not (Mackersie et al., 2004). This article is summarized in more detail here: http://blog.starkeypro.com/bid/66368/Recommendations-for-fitting-patients-with-cochlear-dead-regions

The current study was undertaken to examine the subjective and objective effect of high frequency amplification on matched pairs of participants (with and without DRs) in a variety of conditions. Participants were fitted with hearing aids that had two programs: the first (NAL) was based on the NAL-NL1 formula and the second (LP) was identical to the NAL-NL1 program below 1000Hz, with amplification rolled off above 1000Hz.  The goals of the study were to compare performance with these two programs, for individuals with and without dead regions. The following measures were conducted:

1) Speech discrimination in quiet laboratory conditions

2) Speech discrimination in noisy laboratory conditions

3) Subjective performance in everyday situations

4) Subjective preference for everyday situations

Participants were recruited from a pool of individuals who had previously been identified as typical hearing aid patients (Cox et al., 2011). Participants had bilateral flat or sloping sensorineural hearing loss with thresholds above 25 dB HL below 1kHz and thresholds of 60 to 90 dB HL for at least part of the frequency range of 1-3kHz.

The TEN test (Moore et al., 2004) was administered to determine the presence of DRs. To be eligible for the study, participants needed to have one or more DRs in the better ear at or above 1kHz and no DRs below 1kHz. Participants were then divided into two groups: the experimental group with DRs and the control group without DRs. Individuals in the experimental group showed a diverse range of DR distribution across frequency. Almost half of the participants had DRs between 1-2kHz, whereas the remainder had DRs only at or above 3kHz. A little more than half of the participants had one DR only, whereas the others had more than one DR.

Individuals in the experimental group were matched in pairs with individuals from the control group. In total, there were 18 participant pairs; each matched for age, degree of hearing loss and slope of hearing loss. There were 24 men and 12 women. No attempt was made to match pairs based on gender.

Participants were fitted monaurally with behind-the-ear hearing aids coupled to vented skeleton earmolds. The monaural fitting was chosen to avoid complications when participants switched between the NAL and LP programs. Data collection was completed before the widespread availability of wireless hearing aids, so the participants would have had to reliably switch both hearing aids individually to the proper program every time to avoid making occasional subjective judgments based on mismatched programs.

The hearing aids had two programs: a program based on the NAL-NL1 prescription (NAL) and a program with high-frequency roll-off (LP). Participants were able to switch the programs themselves but could not identify the programs as NAL or LP. Half of the participants had NAL in P1 and LP in P2, whereas the other half had LP in P1 and NAL in P2.  Verification measures were conducted to ensure that the two programs matched below 1kHz and to make sure the participants judged the programs to be equally loud.

After a two week acclimatization period, participants returned for speech recognition testing and field trial training. Speech and noise stimuli were presented in a sound field with the unaided ear plugged during testing. Speech recognition in quiet was evaluated using the Computer Assisted Speech Perception Assessment (CASPA; Mackersie et al., 2001). The CASPA test includes lists of 10 consonant-vowel-consonant words spoken by a female. Five lists were presented for each of the NAL and LP programs. Stimuli were presented at 65dB SPL.

Speech recognition in noise was evaluated with the Bamford-Kowal-Bench Speech in Noise test (BKB-SIN; Etymotic Research, 2005), which contains sentences spoken by a male talker, masked by 4-talker babble. The test contains lists of 10 sentences with 31 scoring words. In each list, the signal-to-noise ratio (SNR) decreases by 3dB with each sentence, so that within any list the SNR ranges from +21dB to -6dB. Sentences were presented at 73dB, a “loud but OK” level, as recommended for this test.
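The fixed SNR schedule of a BKB-SIN list can be laid out explicitly: with 10 sentences and a 3 dB decrement per sentence, each list spans +21 dB down to -6 dB.

```python
# SNR of each sentence in a BKB-SIN list: starts at +21 dB,
# drops 3 dB per sentence across the 10 sentences.
snrs = [21 - 3 * i for i in range(10)]
print(snrs)  # [21, 18, 15, 12, 9, 6, 3, 0, -3, -6]
```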

Following the speech recognition testing, participants were trained in the field trial procedures for subjective ratings. They were asked to evaluate their ability to understand speech in everyday situations with the NAL and LP programs and identify occasions during which they felt they could understand some but not all of the words they were hearing. Participants were given booklets with daily rating sheets and listening checklists to record daily hours of hearing aid use and track the variety of their daily listening experiences.

After a two week field trial, participants returned to the laboratory for a second session of CASPA and BKB-SIN testing. They submitted ratings sheets and listening checklists and were interviewed about their preferred hearing aid program for everyday listening. The interview consisted of questions that covered program preferences related to:  understanding speech in quiet, understanding speech in noise, hearing over long distances, the sound of their own voice, sound quality, loudness, localization, the least tiring program and the one that provided the most comfortable sound. Participants were asked to indicate their preferred program for each of these criteria, as well as their preferred program for overall everyday use. They were asked to provide three reasons for overall preference.

Speech recognition testing in quiet revealed no difference in overall performance between the two groups, but there was a significant difference based on the hearing aid program that was used. Listeners from both the experimental group and the control group performed better with the broadband NAL program, though the difference between the NAL and LP programs was larger for the control group than the experimental group. This indicates that the individuals without DRs were able to derive more benefit from the additional high frequency information in the NAL program than the individuals with DRs did.

Speech recognition testing in noise revealed a similar finding but in this case the improvement with the NAL program was only significant for the control group. Although the experimental group’s mean scores with the NAL program were higher than those with the LP program, the difference did not reach statistical significance.  Because the BKB-SIN test used variable SNR levels, performance-intensity functions were constructed with scores obtained using the NAL and LP programs. These functions revealed that at any given SNR, speech was more intelligible with the NAL program. However, there was more of a difference between the NAL and LP functions for the control group than the experimental group, consistent with a program by group statistical interaction.

Subjective ratings of speech understanding revealed no significant difference between the experimental and control groups, but there was a significant difference based on program.  Participants from the control and experimental groups rated their performance better with the NAL program.

Interviews concerning program preference revealed that 23 participants preferred the NAL program and 11 preferred the LP program. There was no association with the presence of DRs. When the reasons supporting the participants’ preferences were analyzed, the most frequently mentioned reason for NAL preference was greater speech clarity. The most common reason for LP preference was that the other program (NAL) was too loud.

This investigation by Dr. Cox and her colleagues indicates that high-frequency amplification was beneficial to participants with single or multiple DRs, especially for speech recognition in quiet. In noise, participants with DRs still performed better with the NAL program, though the improvement was not as marked as it was for those without DRs. In field trials, participants with DRs reported more improvement with the NAL program than the control group did, indicating that perceived benefits in everyday situations exceeded any predictions of the laboratory results. At no point in the study did high-frequency amplification reduce performance for individuals with or without DRs.

This finding is in contrast with previous reports (Vinay & Moore, 2007a; Gordo & Iorio, 2007). Cox and her colleagues note that most of the participants in their study had only one or two DRs as opposed to several contiguous DRs. They allow that their findings might not relate to the performance of participants with several contiguous DRs, but point out that among typical hearing aid candidates, it is unlikely for individuals to have more than one or two DRs. With this consideration, the authors suggest that high frequency amplification should not be reduced, even in cases with identified dead regions.

This study from the University of Memphis provides a recommendation for use of prescribed settings and against reduction of high frequency gain for hearing aid users with one or two DRs.  They found beneficial effects of high frequency amplification in laboratory and everyday environments and noted no circumstances in which listeners demonstrated deleterious effects of high frequency amplification. These results may not pertain to individuals with several contiguous DRs but they are pertinent to the majority of typical hearing aid wearers. Their findings also support the use of subjective performance measures, as these provided additional information that was sometimes in contrast to the laboratory results. They point out that laboratory results do not always predict performance in everyday life and it can be extrapolated that clinical measures of efficacy should always be supported with subjective reports of effectiveness, like self-assessment of comfort and acceptance.

References

Baer, T., Moore, B.C. & Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 pt. 1), 1133-1144.

Ching, T.Y., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R. M., Alexander, G.C., & Johnson, J.A. (2011). Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32, 339-348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

Etymotic Research (2005). BKB-SIN Speech in Noise Test, Version 1.03. Elk Grove Village, IL: Etymotic Research.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Moore, B.C., Huss, M. & Vickers, D.A. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C., Glasberg, B. R. & Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Gordo, A. & Iorio, M.C. (2007). Dead regions in the cochlea at high frequencies: Implications for the adaptation to hearing aids. Revista Brasileira de Otorrinolaringologia 73, 299-307.

Hogan, C.A. & Turner, C.W. (1998). High frequency audibility: benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Mackersie, C.L., Boothroyd, A. & Minniear, D. (2001). Evaluation of the Computer-Assisted-Speech Perception Assessment Test (CASPA).  Journal of the American Academy of Audiology 12, 390-396.

Mackersie, C.L., Crocker, T.L. & Davis, R.A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15, 498-507.

Moore, B.C. (2001a). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C. (2001b). Dead regions in the cochlea: Implications for the choice of high-frequency amplification. In R.C. Seewald & J.S. Gravel (Eds). A Sound Foundation Through Early Amplification, p 153-166. Stafa, Switzerland: Phonak AG.

Padilha, C., Garcia, M.V., & Costa, M.J. (2007).  Diagnosing cochlear dead regions and its importance in the auditory rehabilitation process. Brazilian Journal of Otolaryngology 73, 556-561.

Turner, C.W. (1999). The limits of high-frequency amplification. Hearing Journal 52, 10-14.

Turner, C.W. & Cummings, K.J. (1999). Speech audibility for listeners with high-frequency hearing loss. American Journal of Audiology 8, 47-56.

Vickers, D.A., Moore, B.C. & Baer, T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay, B.T. & Moore, B.C. (2007a). Prevalence of dead regions in subjects with sensorineural hearing loss. Ear and Hearing 28, 231-241.

Do Patients with Severe Hearing Loss Benefit from Directional Microphones?

Ricketts, T.A., & Hornsby, B.W.Y. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology, 45, 190-197.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors.

The benefit of directional microphones for speech recognition in noise is well established for individuals with mild to moderate hearing loss (Madison & Hawkins, 1983; Killion et al., 1998; Ricketts 2000a; Ricketts & Henry, 2002).  The potential benefit of directional microphones for severely hearing-impaired individuals is less understood and few studies have examined directional benefit when hearing loss is greater than 65dB.

Killion and Christensen (1998) proposed that listeners with severe-to-profound hearing loss may experience reduced directional benefit because they are less able to make use of speech information across frequencies. Ricketts, Henry and Hornsby confirmed this hypothesis in a 2005 study. They found an approximately 7% increase in speech recognition score per 1dB increase in directivity for listeners with moderate hearing loss, whereas listeners with severe loss achieved only an approximately 3.5% increase per 1dB increase in directivity.
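For illustration only, the reported slopes (approximately 7%/dB for moderate loss versus 3.5%/dB for severe loss) can be applied to a hypothetical improvement in directivity index; the 4 dB figure below is an assumption for the example, not a value from the study.

```python
def predicted_benefit(di_improvement_db, slope_pct_per_db):
    """Rough linear estimate: change in speech recognition score
    (percentage points) for a given improvement in directivity index.
    Illustrative sketch only, using slopes reported in the summary above."""
    return di_improvement_db * slope_pct_per_db

# Hypothetical 4 dB improvement in directivity index:
print(predicted_benefit(4, 7.0))  # moderate loss: 28.0 percentage points
print(predicted_benefit(4, 3.5))  # severe loss: 14.0 percentage points
```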

In their 2005 study, Ricketts and Hornsby used individually determined SNRs and auditory-visual stimuli that allowed testing at poorer SNRs without floor effects. The authors point out that visual cues usually offer a greater benefit at poor SNRs, especially for sentence materials (Erber, 1969; Sumby & Pollack, 1954; MacLeod & Summerfield, 1987). Because individuals rely more on visual cues at poorer SNRs, visual information that provides complementary, non-redundant cues is most beneficial (Grant et al., 1998; Walden et al., 1993).

The purpose of their study was to examine potential directional benefit for severely hearing-impaired listeners at multiple SNRs in auditory-only and auditory-visual conditions. Directional and omnidirectional performance in quiet was also tested to rule out performance differences between microphone modes that could be attributed to reduction of environmental noise by the directional microphone. Finally, the study examined whether performance in quiet would significantly exceed performance at highly favorable SNRs. Though significant improvement at SNRs more favorable than +15 dB is usually not expected, some research suggests that hearing-impaired individuals may experience additional benefit from more favorable SNRs (Studebaker et al., 1999).

Twenty adult participants with severe-to-profound sensorineural hearing loss participated in the study. All participants used oral communication, had at least nine years of experience with hearing aids and had pure tone average hearing thresholds greater than 65dB. Participants were fitted with power behind-the-ear hearing aids with full shell, unvented earmolds. Digital noise reduction and feedback management were turned off. The directional program was equalized so that gain matched the omnidirectional mode as closely as possible.

The Audio/Visual Connected Speech Test (CST; Cox et al, 1987), a speech recognition test with paired passages of connected speech, was presented to listeners on DVD. Speech was presented at a 0-degree azimuth angle and uncorrelated competing noise was presented via five loudspeakers surrounding the listener. Testing took place in a sound booth with reflective panels to approximate common levels of reverberation in everyday situations.

Baseline SNRs were obtained for each subject in auditory-only and auditory-visual conditions, at a level that was near, but not reaching floor performance. Speech recognition testing was conducted for omnidirectional and directional conditions at baseline SNR, baseline + 4dB and baseline + 8dB. Presentation SNRs ranged from 0dB to +24dB for auditory-only conditions and from -6dB to +18dB for auditory-visual conditions. Listeners were tested with auditory-only stimuli in quiet conditions, for omnidirectional and directional modes. Testing in quiet was not performed with auditory-visual stimuli, as performance was expected to approach ceiling performance levels.

The multiple SNR levels were achieved with two different methodologies. Half of the participants listened to a fixed noise level of 60dB SPL and speech levels were varied to achieve the desired SNRs. The remaining participants listened to a fixed speech level of 67dB SPL and the noise levels were adjusted to reach the desired SNR levels. Data analysis revealed no significant differences between these two test methodologies for any of the variables, so their data was pooled for subsequent analyses.

The results showed significant main effects for microphone mode (directional versus omni), SNR and presentation condition (auditory-only versus auditory-visual). There were significant interactions between microphone mode and SNR, as well as between presentation condition and SNR.  Each increase in SNR resulted in significantly better performance for both omnidirectional and directional modes. Performance in directional mode was significantly better than omnidirectional for all SNR levels. The authors pointed out that auditory-visual performance at all three SNRs was always better than auditory-only, despite the fact that the absolute SNRs for auditory-visual conditions were lower than the equivalent auditory-only conditions, by an average of 5dB.  The authors interpreted this finding as strong support for the benefit of visual cues for speech recognition in adverse conditions.

When the effects of directionality and SNR were analyzed separately for auditory-only and auditory-visual conditions, they found that directional performance was significantly better than omnidirectional performance for all auditory-visual conditions. In auditory-only conditions, directionality only had a significant effect at the baseline SNR, but not in the baseline +4dB, baseline +8dB or quiet conditions.

Perhaps not surprisingly, Ricketts and his colleagues found that the addition of visual cues offered their severely-impaired listeners a significant advantage for understanding connected speech. When they compared the auditory-only and auditory-visual scores at equivalent SNR levels, they determined that participants achieved an average improvement of 22% with the availability of visual cues. This finding is in agreement with previous research that found a visual advantage of 24% for listeners with moderate hearing loss (Henry & Ricketts, 2003).

Also not surprisingly, performance improved with increases in signal to noise ratio.  For the auditory-only condition, they found an average improvement of 1.6% per dB and 2.7% per dB for the omnidirectional and directional modes, respectively. For the auditory-visual condition, there was an improvement of 3.7% per dB for omnidirectional mode and 3.1% per dB for directional mode.  Furthermore, they found an additional performance increase of 8% for directional mode and 12% for omnidirectional mode when participants were tested in quiet conditions. This was somewhat surprising given previous research based on the articulation index (AI) that suggested maximal performance could be expected at SNRs of approximately +15dB.  The absolute SNR for the baseline +8dB condition was 14.7dB, so further improvements in quiet conditions support the suggestion that hearing-impaired listeners experience increased improvement for SNRs up to +20dB (Studebaker et al, 1999; Sherbecoe & Studebaker, 2002).

The benefit of visual cues was not specifically addressed by this study because it did not compare auditory-only and auditory-visual performance at the same SNR levels. However, the discovery that visual cues improved performance even when the SNRs were approximately 5dB poorer was strong support for the benefit of visual information for speech recognition in noisy environments. This underscores the recommendation that severely hearing-impaired listeners should always be counseled to take advantage of visual cues whenever possible, especially in adverse listening conditions. Although visual cues cannot completely counterbalance the auditory cues lost to hearing loss and competing noise, they supply additional information that can help the listener identify or differentiate phonemes, especially in connected speech containing semantic and syntactic context. In conversational situations, visual cues include not just lip-reading but also the speaker’s gestures, expressions and body language. All of these cues can aid speech recognition, so hearing-impaired individuals as well as their family members should be trained in strategies to maximize the availability of visual information.

Ricketts and Hornsby’s study supports the potential benefit of directional microphones for individuals with severe hearing loss. Many hearing aid users with severe-to-profound loss have become accustomed to the use of omnidirectional microphones and may be resistant to directional microphones, especially automatic directionality, if it is in the primary program of their hearing instruments. One strategy for addressing these cases is to program the hearing aid’s primary memory as full-time omnidirectional while programming a second, manually accessed, memory with a full-time directional microphone. This way the listener is able to choose when and how they use their directional program and may be less likely to experience unexpected and potentially disconcerting changes in perceived loudness and sound quality.

In addition to providing evidence for the benefit of visual cues and directionality, the findings of this study can be extrapolated to support the use of FM and wireless accessories. The fact that performance in quiet conditions was still significantly better than the next most favorable SNR (14.7dB) shows that improving SNR as much as possible provides demonstrable advantages for listeners with severe hearing loss. Even for individuals who do well with their hearing instruments overall, wireless accessories that stream audio directly to the listener's hearing instruments may further improve understanding. These accessories improve SNR by reducing the effect of room acoustics and reverberation, as well as reducing the effect of competing noise and distance between the sound source and the listener. Most modern hearing instruments are compatible with wireless accessories, so hearing aid evaluations should always include discussion of their potential benefits. These devices work with a wide range of hearing aid styles, do not require the use of an adapter or receiver boot and are much less expensive than an FM system.

Ricketts and Hornsby’s study underscores the importance of visual information and directionality for speech recognition in noisy environments and illuminates ways in which clinicians can help patients with severe-to-profound loss achieve improved communication in everyday circumstances. Modern technologies such as directional processing and wireless audio streaming accessories can be effective tools for improving SNRs in everyday situations that may otherwise challenge or overwhelm the listener with severe to profound hearing loss.

References

Erber, N.P. (1969). Interaction of audition and vision in the reception of oral speech stimuli. Journal of Speech and Hearing Research 12, 423-425.

Grant, K.W., Walden, B.E. & Seitz, P.F. (1998). Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Henry, P. & Ricketts, T. A. (2003).  The effect of head angle on auditory and visual input for directional and omnidirectional hearing aids. American Journal of Audiology 12(1), 41-51.

Killion, M. C., Schulien, R., Christensen, L., Fabry, D. & Revit, L. (1998). Real world performance of an ITE directional microphone. Hearing Journal 51(4), 24-38.

Killion, M.C. & Christensen, L. (1998). The case of the missing dots: AI and SNR loss. Hearing Journal 51(5), 32-47.

MacLeod, A. & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology 21, 131-141.

Madison, T.K. & Hawkins, D.B. (1983). The signal-to-noise ratio advantage of directional microphones. Hearing Instruments 34(2), 18, 49.

Pavlovic, C. (1984). Use of the articulation index for assessing residual auditory function in listeners with sensorineural hearing impairment. Journal of the Acoustical Society of America 75, 1253-1258.

Pavlovic, C., Studebaker, G. & Sherbecoe, R. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. Journal of the Acoustical Society of America 80(1), 50-57.

Ricketts, T.A. (2000a). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21(3), 194-205.

Ricketts, T.A. & Henry, P. (2002). Evaluation of an adaptive directional microphone hearing aid. International Journal of Audiology 41(2), 100-112.

Ricketts, T., Henry, P. & Hornsby, B. (2005). Application of frequency importance functions to directivity for prediction of benefit in uniform fields. Ear & Hearing 26(5), 473-86.

Studebaker, G., Sherbecoe, R., McDaniel, D. & Gwaltney, C. (1999). Monosyllabic word recognition at higher-than-normal speech and noise levels. Journal of the Acoustical Society of America 105(4), 2431-2444.

Sherbecoe, R.L., Studebaker, G.A. (2002). Audibility-index functions for the Connected Speech Test. Ear & Hearing 23(5), 385-398.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Walden, B.E., Busacco, D.A. & Montgomery, A.A. (1993). Benefit from visual cues in auditory-visual speech recognition by middle-aged and elderly persons. Journal of Speech and Hearing Research 36, 431-436.

Transitioning the Patient with Severe Hearing Loss to New Hearing Aids

Convery, E., & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors. 

Many individuals with severe-to-profound hearing loss are full-time, long-term hearing aid users. Because they rely heavily on their hearing aids for everyday communication, they are often reluctant to try new technology. It is common to see patients with severe hearing loss keep a set of hearing aids longer than those with mild-to-moderate losses. These older hearing aids offered less effective feedback suppression and a narrower frequency range than those available today. The result was that many severely-impaired hearing aid users were fitted with inadequate high-frequency gain and compensatory increases in low- and mid-frequency amplification. Having adapted to this frequency response, they may reject new hearing aids with increased high-frequency gain, stating that they sound too tinny or unnatural. Similarly, those who have adjusted to linear amplification may reject wide-dynamic-range compression (WDRC) as too soft, even though the strategy may provide some benefits when compared to their linear hearing aids.

Convery and Keidser evaluated a method to gradually transition experienced, severely impaired hearing aid users to new amplification characteristics. They measured subjective and objective outcomes as subjects took incremental steps toward a more appropriate frequency response. Twenty-three experienced, adult hearing aid users participated in the study. Participation was limited to subjects whose current gain and frequency response differed significantly from targets based on NAL-RP, a modification of the NAL formula for severe to profound hearing losses (Byrne et al., 1991). Most subjects' own instruments had more gain at 250-2000Hz and less gain at 6-8kHz than NAL-RP targets, so the experimental transition involved adapting to less low- and mid-frequency gain and more high-frequency gain.

Subjects in the experimental group were fitted bilaterally with WDRC behind-the-ear hearing instruments. Directional microphones, noise reduction and automatic features were turned off, and volume controls were activated with an 8dB range. The hearing aids had two programs: the first, called the "mimic" program, had a gain/frequency response adjusted to match the subject's current hearing aids; the second was set to NAL-RP targets. MPO was the same for both programs. The programs were not accessible to the user; they were adjusted only by the experimenters at test sessions.

Four incremental programs were created for each participant in the experimental group. Each step was approximately a 25% progression from their mimic program frequency response to the NAL-RP prescribed response. At 3 week intervals, they were switched to the next incremental program, approaching NAL-RP settings as the experiment progressed.  The programs in the control group’s hearing aids remained consistent for the duration of the study.
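The incremental steps described above amount to interpolating between two gain/frequency responses. The sketch below is only an illustration of that idea, not the authors' fitting software; the gain values and frequencies are hypothetical.

```python
# Illustrative sketch of Convery & Keidser's transition protocol:
# four ~25% steps from a "mimic" gain response toward NAL-RP targets.
# All gain values (dB, per audiometric frequency) are hypothetical.

mimic  = {250: 35, 500: 38, 1000: 40, 2000: 42, 4000: 30, 6000: 18}
nal_rp = {250: 28, 500: 33, 1000: 38, 2000: 40, 4000: 38, 6000: 30}

def incremental_program(step, total_steps=4):
    """Gain response for a transition step (0 = mimic, 4 = full NAL-RP)."""
    frac = step / total_steps
    return {f: round(mimic[f] + frac * (nal_rp[f] - mimic[f]), 1)
            for f in mimic}

# Each 3-week visit advances the experimental group one step toward NAL-RP.
for step in range(5):
    print(step, incremental_program(step))
```

Note that the steps reduce low- and mid-frequency gain while raising high-frequency gain, matching the direction of the transition the subjects experienced.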

All subjects attended 8 sessions. At the initial session, subjects' own instruments were measured in a 2cc coupler and RECD measurements were obtained with their own earmolds. The experimental hearing aids were fitted at the next session and subjects returned for follow-up at 1 week post-fitting and at 3-week intervals thereafter until 15 weeks post-fitting.

Subjects evaluated the mimic and NAL-RP programs in paired comparisons at 1 week and 15 weeks post-fitting. The task used live dialogues with female talkers in four everyday environments: café, office, reverberant stairwell and outdoors with traffic noise in the background. Hearing aid settings were switched from mimic to NAL-RP with a remote control, without audible program change beeps, so subjects were unaware of their current program. They were asked to indicate their preference for one program over the other on a 4-point scale: no difference, slightly better, moderately better or much better.

Speech discrimination was evaluated with the Beautifully Efficient Speech Test (BEST; Schmitt, 2004) which measured the aided SRT for sentence stimuli. Loudness scaling was then conducted to determine the most comfortable loudness level and range (MCL/R).  Finally, subjects responded to a questionnaire concerning overall loudness comfort, speech intelligibility, sound quality, use of the volume control, use of their own hearing aids and perceived changes in audibility and comfort.  Speech discrimination, loudness scaling and questionnaire administration took place for all participants at 3 week intervals, starting at the 3 week post-fitting session.

One goal of the study was to determine whether speech discrimination would change over time or differ between the experimental and control groups. Analysis of BEST SRT scores yielded no significant difference between the groups, nor a significant change in SRT over time. There was, however, a significant interaction between these variables: the experimental group's SRT scores grew slightly poorer over time, whereas the control group's improved slightly.

Subjects rated perceptual disturbance: how much the hearing aid settings in the current test period differed from the previous period and how disturbing the difference was. There was no significant effect for either group, but reports of perceptual disturbance tended to decrease over time for the control group and increase for the experimental group. The control group's programs were consistent, so those subjects likely became acclimated over time. The experimental group, however, received incremental program changes at each session, so it is not surprising that they reported more perceptual disturbance. This was only a slight trend, indicating that even the experimental group experienced relatively little disturbance as their hearing aids approached NAL-RP targets.

Analysis of the paired comparison responses indicated a significant overall preference for the mimic program over the NAL-RP program. There was an interaction between environment and listening program, showing a strong preference for the mimic program in office and outdoor environments and somewhat less of a preference in the café and stairwell environments. When asked about their criteria for the comparisons, subjects most commonly cited speech clarity, loudness comfort and naturalness, regardless of whether mimic fit or NAL-RP was preferred.  There was no significant effect of time on program preference, but there was a slight increase in the control group’s preference for mimic at the end of the study, whereas the experimental group shifted slightly toward NAL-RP, away from mimic.

Over the course of the study, Convery and Keidser’s subjects demonstrated acceptance of new frequency responses with less low- to mid-frequency gain and more high frequency gain than their current hearing aids. No significant differences were noted between experimental and control groups for loudness, sound quality, voice quality, intelligibility or overall performance, nor did these variables change significantly over time. Though all subjects preferred the mimic program overall, there was a trend for the experimental group to shift slightly toward a preference for the NAL-RP settings, whereas the control group did not. This indicates that the experimental subjects had begun to acclimate to the new, more appropriate frequency response. Acclimatization might have continued to progress, had the study examined performance over a longer period of time. Prior research indicates that acclimatization to new hearing aids can progress over the course of several months and individuals with moderate and severe losses may require more time to adjust than individuals with milder losses (Keidser et al, 2008).

Reports of perceptual disturbance increased as incremental programs approached NAL-RP settings. This may not be surprising to clinicians, as hearing aid patients often require a period of acclimatization even after relatively minor changes to their hearing aid settings. Furthermore, clinical observation supports the suggestion that individuals with severe hearing loss may be even more sensitive to small changes in their frequency response. Allowing more than three weeks between program changes may result in less perceptual disturbance and an easier transition to the new frequency response. Clinically, perceptual disturbance with a new frequency response can also be mitigated by counseling patients and reassuring them that they will become more comfortable with the new hearing aids as they progress through their trial periods. It might also be helpful to extend the trial period (usually 30-45 days) for individuals with severe to profound hearing losses, to accommodate an extended acclimatization period.

Individuals with severe-to-profound hearing loss often hesitate to try new hearing aids.  Similarly, audiologists may be reluctant to recommend new instruments with WDRC or advanced features for fear that they will be summarily rejected. Convery and Keidser’s results support a process for transitioning experienced hearing aid users into new technology and suggest an alternative for clinicians who might otherwise hesitate to attempt departures from a patient’s current frequency response.

Because this was a double-blind study, the research audiologists were unable to counsel subjects as they would in a typical clinical situation.  The authors note that counseling during transition is of particular importance for severely impaired hearing aid users, to ensure realistic expectations and acceptance of the new technology. Though the initial fitting may approximate the client’s old frequency response, follow-up visits at regular intervals should slowly implement a more desirable frequency response.  Periodically, speech discrimination and subjective responses should be evaluated and the transition should be stopped or slowed if decreases in intelligibility or perceptual disturbances are noted.

In addition to changes in the frequency response, switching to new hearing aid technology usually means the availability of unfamiliar features such as directional microphones, noise reduction and many wireless features. Special features such as these can be introduced after the client acclimates to the new frequency response, or they can be relegated to alternate programs to be used on an experimental basis by the client. For instance, automatic directional microphones are sometimes not well-received by individuals who have years of experience with omnidirectional hearing aids. By offering directionality in an alternate program, the individual can test it out as needed and may be less likely to reject the feature or the hearing aids.  It is critical to discuss proper use of the programs and to set up realistic expectations.  Because variable factors such as frequency resolution and sensitivity to incremental amplification changes may affect performance and acceptance, the transition period should be tailored to the needs of the individual and monitored closely with regular follow-up appointments.

References

Baer, T., Moore, B.C.J. & Kluk, K. (2002).  Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 112, 1133-1144.

Barker, C., Dillon, H. & Newall, P. (2001). Fitting low ratio compression to people with severe and profound hearing losses. Ear and Hearing. 22, 130-141.

Byrne, D., Parkinson, A. & Newall, P. (1991).  Modified hearing aid selection procedures for severe/profound hearing losses. In: Studebaker, G.A. , Bess, F.H., Beck, L. eds. The Vanderbilt Hearing Aid Report II. Parkton, MD: York Press, 295-300.

Ching, T.Y.C., Dillon, H., Lockhart, F., vanWanrooy, E. & Carter, L. (2005). Are hearing thresholds enough for prescribing hearing aids? Poster presented at the 17th Annual American Academy of Audiology Convention and Exposition, Washington, DC.

Convery, E. & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

Flynn, M.C., Davis, P.B. & Pogash, R. (2004). Multiple-channel non-linear power hearing instruments for children with severe hearing impairment: long-term follow-up. International Journal of Audiology. 43, 479-485.

Keidser, G., Hartley, D. & Carter, L. (2008). Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey. American Journal of Audiology. 17, 136-146.

Keidser, G., O’Brien, A., Carter, L., McLelland, M., and Yeend, I. (2008) Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology. 47(10), 621-635.

Kuhnel, V., Margolf-Hackl, S. & Kiessling, J. (2001). Multi-microphone technology for severe to profound hearing loss. Scandinavian Audiology 30 (Suppl. 52), 65-68.

Moore, B.C.J. (2001). Dead regions in the cochlea: diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification. 5, 1-34.

Moore, B.C.J., Killen, T. & Munro, K.J. (2003). Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. International Journal of Audiology. 42, 465-474.

Schmitt, N. (2004). A New Speech Test (BEST Test). Practical Training Report. Sydney: National Acoustic Laboratories.

Vickers, D.A., Moore, B.C.J. & Baer, T. (2001). Effect of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 110, 1164-1175.

Prescribing Compression for Severe Hearing Loss

Souza, P.E., Jenstad, L.M. & Folino, R. (2005). Using multichannel wide-dynamic range compression in severely hearing-impaired listeners: effects on speech recognition and quality. Ear and Hearing 26(2), 120-131.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies and does not reflect the opinions of the authors.

Most modern hearing aids feature multiple signal processing channels and wide dynamic range compression (WDRC). For listeners with mild to moderate hearing loss, WDRC can offer improvement in speech intelligibility and sound quality in quiet conditions (Souza, 2002). Small benefits of WDRC over linear amplification have also been observed in the presence of background noise (Moore et al., 1999). Most of the research on WDRC, however, has been conducted with individuals with mild or moderate hearing loss; there is considerably less data available on WDRC performance for severely hearing-impaired listeners. In fact, many clinical practitioners believe that patients with greater amounts of hearing loss prefer, and benefit most from, linear amplification.

Severe hearing loss is accompanied by reduced frequency selectivity (Faulkner, et al., 1990; Rosen, et al., 1990) and temporal resolution (Lamore, et al., 1990; Nelson & Freyman, 1987). Beyond the audibility constraints of hearing loss alone, these impairments further limit the ability to identify and discriminate speech cues. Because spectral cues may be limited or unavailable, severely impaired listeners rely on other cues, such as variations in the speech amplitude envelope over time (Rosen et al., 1990). For these reasons, many compressor designs are constrained to minimally degrade the speech signal.

One particular concern about WDRC circuitry is that natural variations in speech amplitude may be altered, reducing the availability of amplitude-related cues. The result can be degradation in consonant perception (Souza & Turner, 1998) or overall sentence recognition (Souza & Kitch, 2001b; Stone & Moore, 2003; VanTasell & Trine, 1996).  The reduction of amplitude cues, in combination with impaired frequency selectivity, could result in poor performance for severely hearing-impaired individuals using WDRC hearing aids.  Indeed, in a study with severely hearing-impaired participants, Souza and Bishop (1999) found smaller improvements in sentence recognition for WDRC amplification than for linear amplification.  DeGennaro, et al., (1986) also found no advantage with a WDRC system for severely hearing-impaired listeners, despite the fact that the compression system provided improved audibility over the linear system.

In contrast, other studies suggest WDRC benefits for individuals with severe hearing loss. Barker, et al., (2001) found that listeners with severe hearing loss preferred single channel WDRC to either output compression limiting or peak clipping.  Although user preference is a critical element of a successful hearing aid fitting, it is inarguably more important to ensure adequate speech recognition.  While many prescriptive formulas attempt to find a balance, there is no consensus on the most appropriate amplification characteristics to ensure speech recognition and acceptable sound quality for hearing aid users with severe hearing loss.

The purpose of Souza et al.’s study was to examine speech recognition performance and speech quality judgments for severely hearing-impaired listeners using four different types of amplification:

1.  Linear with peak clipping

2.  Linear with output compression limiting

3.  Two-channel WDRC

4.  Three-channel WDRC

Thirteen participants with severe sensorineural hearing loss, most of whom had previous hearing aid experience, took part in the study. Seven participants with normal hearing served as a control group.

Speech recognition was evaluated using the Nonsense Syllable Test (NST; Resnick, et al., 1976).  Speech quality judgments were obtained with a paired comparison task, using sentence stimuli from the Connected Speech Test (CST) (Cox, et al., 1987).  For each stimulus pair, participants heard the same sentence processed in two different amplification conditions and were asked to select the one they preferred. Listeners were specifically instructed to avoid using loudness as a primary criterion for their preference.

Speech materials were processed offline with a master hearing aid simulation of each amplification type described above; signals were presented at 70dB SPL via an ER-2 insert earphone. The frequency/gain response was determined from the mean audiometric thresholds of the participant group, with targets representing an average of the NAL-RP (Byrne et al., 1990) and NAL-NL1 (Dillon, 1999) prescriptions for conversational speech.

As would be expected, the speech recognition scores for normal-hearing participants were high and equivalent for all test conditions.  Hearing-impaired participants showed poorer performance for peak clipping and multichannel WDRC conditions than for compression limiting. The only statistically significant difference was between compression limiting and 3-channel WDRC.

A more detailed feature analysis was conducted to determine which speech features (place, voicing and manner cues) were poorly transmitted by the 3-channel WDRC system. Place cues were not transmitted effectively by any of the amplification systems, probably because the hearing-impaired participants had poor frequency resolution and place is mainly conveyed by spectral cues (Rosen, 1992). The two WDRC amplification types preserved voicing information slightly better than compression limiting or peak clipping. Of primary interest was manner of articulation, as it is primarily transmitted via amplitude envelope cues (Rosen, 1992), and the amplitude envelope is expected to be more distorted by fast-acting WDRC than by either peak clipping or output compression limiting.

Detailed analysis of manner transmission revealed that amplification type affected phoneme categories differently. Fricatives were fairly well preserved by compression limiting and WDRC systems but were less well transmitted by the peak clipping system.  The 2-channel WDRC system transmitted nasality better than peak clipping or compression limiting, but the 3-channel WDRC did not preserve nasality cues as well. Because nasality is primarily transmitted via the low-frequency nasal murmur (Kent & Read, 1992) the channel crossover in the 3-channel system may have disrupted this cue.  Affricates, a combination of a stop and fricative, were best preserved by linear amplification – peak clipping or compression limiting – and were negatively affected by WDRC. The authors surmised that poor affricate identification may have been related to the time parameters of their WDRC algorithm, causing amplitude decreases after the stop to be perceived as bursts, resulting in misidentification of affricates as stop consonants.

Speech quality ratings by the hearing-impaired participants showed a clear preference for compression limiting. There were significant differences between all comparisons with the exception of peak clipping versus two-channel WDRC. No normal-hearing participants preferred peak clipping and their preferences were equally divided among the other options.

Overall, the results of this study indicate poorer speech recognition and sound quality for WDRC compared to compression limiting. Previous work indicated that compression benefit decreased as pure tone thresholds exceeded 70dBHL (Goedegebure, et al., 2001; Souza & Bishop, 1999), and Boothroyd et al., (1988) concluded that amplitude distortions were responsible for poor performance with a two-channel, fast-acting compression system.  These assumptions may be supported by the current work, in which stop-affricate confusions were considered to be related to misinterpretation of amplitude cues.

Though compression limiting yielded better speech recognition and sound quality in the present study, the authors caution that this should not preclude the use of WDRC systems with severely hearing-impaired individuals. For example, they acknowledge that the short release times they used may have affected amplitude envelope cues. Because of the wide variability in parameters among current WDRC circuits, there will be variability in their effect on speech envelope cues as well. Preservation of speech amplitude envelope is likely to aid word recognition and clarity for users with severe loss and may be achieved with longer attack and release times (Kuk & Ludvigsen, 2000).  Alternatively, shorter attack and release times, combined with lower compression ratios may also reduce the detrimental effects of WDRC on amplitude cues.

Souza and her colleagues point out that the frequency responses they used in the study may have deviated from individually prescribed targets, compromising performance. They applied their prescriptive formula to the average hearing loss of their participant population, which may not have provided as close a match to target as would be achieved in a typical hearing aid fitting. In general, their frequency responses over-amplified slightly in the low frequencies and under-amplified in the high frequencies. They also used a fixed compression ratio of 3:1 across channels for all participants, higher than ratios typically prescribed for clinical use; additionally, a single ratio across all channels does not represent a typical prescription. These deviations from clinically prescribed settings may have affected the speech recognition scores and sound quality preferences in their study.

The hearing-impaired individuals in this study clearly preferred the sound quality of compression limiting over the peak clipping or WDRC options. The improved audibility of high-frequency information with the WDRC systems may have adversely affected sound quality for some listeners, especially those not accustomed to high frequency emphasis. Clinicians are well aware that when individuals wear new hearing instruments with better high-frequency amplification, their initial description of the sound quality is “tinny” or “metallic”. It can take days or even weeks for them to adjust to the new frequency response, before the increased high frequency audibility is received favorably.

Similarly, many experienced hearing aid users react negatively to WDRC because it sounds softer than the linear amplification they have become accustomed to. Participants in this study were specifically instructed to disregard loudness in their sound quality judgments, but the relative decrease in loudness in the WDRC trials may have affected their preferences anyway. For experienced users of powerful hearing instruments, a decrease in loudness can be interpreted initially as a decrease in performance, despite accompanying improvements in speech recognition ability.

While educative, these findings are based on atypical compressor parameters; thus, applying them directly to clinical decision making should be done with caution. Beyond the considerations of gain and time constants, sound quality judgments based on everyday performance might be very different, especially after use over a longer period of time. Hearing aid users are known to experience a period of acclimatization to new hearing aids, the duration of which varies depending on their degree of loss and prior use of amplification (Keidser et al., 2008). It is reasonable to assume that a WDRC circuit providing superior high-frequency audibility, though initially perceived as "too soft" or "too tinny", would sound more natural and clear after an extended period of consistent use.

Souza and her colleagues address an important issue for clinicians fitting individuals with severe to profound hearing loss. As they acknowledge, more study is needed regarding specific WDRC parameters and how they affect speech recognition and preferences in the severely hearing-impaired population. Of particular interest is how WDRC parameters perform over time in everyday listening situations, outside of the laboratory. Findings such as these illustrate how objective and subjective measures reveal discrete aspects of a patient's experience, providing information that helps clinicians determine the most appropriate starting point for these individuals.

References

Barker, C., Dillon, H. & Newall, P. (2001). Fitting low ratio compression to people with severe and profound hearing losses. Ear and Hearing 22, 130-141.

Byrne, D., Parkinson, A. & Newall, P. (1990). Hearing aid gain and frequency response requirements for the severely-profoundly hearing impaired. Ear and Hearing 11, 40-49.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing 8 (Suppl.), 119S-126S.

DeGennaro, S., Braida, L.D. & Durlach, N.I. (1986). Multichannel syllabic compression for severely impaired listeners. Journal of Rehabilitation Research and Development 23, 17-24.

Dillon, H. (1999). NAL-NL1: A new prescriptive fitting procedure for non-linear hearing aids. Hearing Journal 52, 10-16.

Faulkner, A., Rosen, S. & Moore, B.C.J. (1990). Residual frequency selectivity in the profoundly hearing-impaired listener. British Journal of Audiology 24, 381-392.

Humes, L.E., Christensen, L., Thomas, T., Bess, F., Hedley-Williams, A. & Bentler, R. (1999). A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. Journal of Speech, Language and Hearing Research 42, 65-79.

Keidser G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology, 47(10), 621-35.

Kent, R.D. & Read, C. (1992). The acoustic analysis of speech. San Diego: Singular Publishing Group.

Kuk, F. & Ludvigsen, C. (2000). Hearing aid design and fitting solutions for persons with severe-to-profound losses. Hearing Journal 53, 29-37.

Lamore, P.J., Verweij, C. & Brocaar, M.P. (1990). Residual hearing capacity of severely hearing-impaired participants. Acta Otolaryngologica Supplement 469, 7-15.

Moore, B.C.J., Peters, R.W. & Stone, M.A. (1999). Benefits of linear amplification and multi-channel compression for speech comprehension in backgrounds with spectral and temporal dips. Journal of the Acoustical Society of America 105, 400-411.

Nelson, D.A. & Freyman, R.L. (1987). Temporal resolution in sensorineural hearing-impaired listeners. Journal of the Acoustical Society of America 81, 709-720.

Resnick, S.B., Dubno, J.R., Hoffnung, S. & Levitt, H. (1976). Phoneme errors on a nonsense syllable test. Journal of the Acoustical Society of America 58 (Suppl. 1), 114.

Rosen, S. (1992). Temporal information in speech: Acoustic, auditory and linguistic aspects. Philosophical transactions of the Royal Society of London Series B: Biological Sciences 336, 367-373.

Rosen, S., Faulkner, A. & Smith, D.A. (1990). The psychoacoustics of profound hearing impairment. Acta Otolaryngologica Supplement 469, 16-22.

Souza, P.E. (2002). Effects of compression on speech acoustics, intelligibility and sound quality. Trends in Amplification 6, 131-165.

Souza, P.E. & Bishop, R. (1999). Improving speech audibility with wide dynamic range compression in listeners with severe sensorineural hearing loss. Ear and Hearing 20, 461-470.

Souza, P.E., Jenstad, L.M. & Folino, R. (2005). Using multichannel wide-dynamic range compression in severely hearing-impaired listeners: effects on speech recognition and quality. Ear and Hearing 26(2), 120-131.

Souza, P.E. & Kitch, V.J. (2001b). The contribution of amplitude envelope cues to sentence identification in young and aged listeners. Ear and Hearing 22, 112-119.

Souza, P.E. & Turner, C. W. (1998). Multichannel compression, temporal cues and audibility. Journal of Speech and Hearing Research 41, 315-326.

Stone, M.A. & Moore, B.C.J. (2003). Effect of the speed of a single-channel dynamic range compressor on intelligibility in a competing speech task. Journal of the Acoustical Society of America 114, 1023-1034.

VanTasell, D. J. & Trine, T.D. (1996). Effect of single-band syllabic amplitude compression on temporal speech information in nonsense syllables and in sentences. Journal of Speech and Hearing Research 39, 912-922.

Understanding the NAL-NL2

Keidser, G., Dillon, H., Flax, M., Ching, T. & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research 1 (e24), 88-90.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

For years, the National Acoustic Laboratories' NAL-NL1 has been the benchmark for compressive, independently derived prescriptive formulas (Dillon, 1999). The recently introduced NAL-NL2 advances the original formula with knowledge gained from a wealth of empirical data collected with NAL-NL1 (Keidser et al., 2011). NAL-NL1 and NAL-NL2 share the primary goals of maximizing speech intelligibility while not exceeding overall normal loudness at a range of input levels, and both use predictive models of speech intelligibility and loudness (Moore & Glasberg, 1997; 2004).

The speech intelligibility model used in both NAL-NL1 and NAL-NL2 differs from the Speech Intelligibility Index (SII; ANSI, 1997). The ANSI SII assumes that, regardless of hearing loss, speech should be fully understood when all speech components are audible. NAL-NL1 instead includes a modification to the SII proposed by Ching and colleagues (1998). This modification, an "effective audibility" factor, assumes that as hearing loss becomes more severe, less information can be extracted from the audible speech signal. More recent data have been collected to derive an updated effective audibility factor for use with NAL-NL2 (Keidser et al., 2011).
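The effective audibility idea can be sketched numerically. The function below is a conceptual toy only: the knee point, slope and function shape are assumptions for illustration, not the published Ching et al. (1998) desensitization data.

```python
# Conceptual sketch of "effective audibility": an audible band's
# contribution to intelligibility shrinks as hearing loss grows.
# The knee (50 dB HL) and slope (1% per dB) are hypothetical values,
# not the published desensitization function.

def effective_audibility(audibility, threshold_db_hl,
                         knee=50.0, slope=0.01):
    """Scale a band's audibility (0-1) by a desensitization factor
    that falls off for thresholds above `knee` dB HL."""
    desensitization = max(0.0, 1.0 - slope * max(0.0, threshold_db_hl - knee))
    return audibility * desensitization

# Under these toy assumptions, fully audible speech at a 90 dB HL
# threshold contributes only 60% of its nominal audibility:
print(effective_audibility(1.0, 90))   # prints 0.6
```

The contrast with the unmodified ANSI SII is the point: there, audibility of 1.0 would contribute fully regardless of the threshold.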

The NAL-NL2 formula includes constraints to prevent compression ratios from exceeding a maximum value for a given frequency or degree of hearing loss. Modifications were based on data suggesting that users with severe or profound hearing loss prefer lower compression ratios than those prescribed by NAL-NL1, when fitted with fast-acting compression (Keidser, et al 2007). However, there is evidence to suggest that higher compression ratios could be used in this population with slow-acting compression. Therefore, in the case of severe or profound hearing losses, the new formula prescribes lower compression ratios for fittings with fast-acting compression than those with slow-acting compression. For mild and moderate losses, compression speed does not affect prescribed compression ratios.

Based on experimental outcomes with NAL-NL1 fittings, the development of NAL-NL2 took various attributes of the hearing aid user into consideration, such as gender, binaural listening, experience level, age and language. In the case of gender, Keidser and Dillon (2006) studied the real-ear insertion gain measurements for the preferred frequency responses of 187 adults, finding that regardless of experience or degree of hearing loss, female participants preferred an average of 2dB less gain than male participants. As a result, gender differences are factored into each fitting.

The NAL-NL2 method still prescribes greater gain for monaural fittings than it does for binaural fittings, a difference similar to that of the NAL-NL1 formula (Dillon et al., 2010). Recent studies suggest that the binaural-to-monaural loudness difference may be smaller than previously indicated (Epstein & Florentine, 2009). For symmetrical hearing losses, the binaural correction ranges from 2dB for inputs below 50dB SPL to 6dB for inputs above 90dB SPL; because relatively more gain is subtracted at high input levels than at low input levels, binaurally fitted users will have higher prescribed compression ratios than monaural users. For asymmetrical losses, the binaural correction decreases as the asymmetry increases.
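The level-dependent binaural correction can be illustrated with a short sketch. The 2dB and 6dB anchor values come from the paragraph above; the linear transition between 50 and 90dB SPL is an assumption made for illustration, not taken from the published formula.

```python
# Sketch of the level-dependent binaural correction described above:
# 2 dB for inputs below 50 dB SPL, rising to 6 dB above 90 dB SPL.
# The linear interpolation between these anchors is an assumption
# for illustration only.

def binaural_correction_db(input_level_db_spl: float) -> float:
    if input_level_db_spl <= 50:
        return 2.0
    if input_level_db_spl >= 90:
        return 6.0
    # assumed linear transition between the two anchor points
    return 2.0 + 4.0 * (input_level_db_spl - 50) / 40.0

# Because more gain is subtracted at high levels than at low levels,
# the aided input-output function becomes shallower, i.e. the
# effective compression ratio is higher for binaural fittings.
print(binaural_correction_db(40))   # 2.0
print(binaural_correction_db(70))   # 4.0
print(binaural_correction_db(95))   # 6.0
```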

Experience with hearing aids, as it relates to degree of hearing loss, is a consideration in the NAL-NL2 formula. Keidser and her colleagues (2008) found that, with increasing severity of hearing loss, new users prefer progressively less gain than experienced hearing aid users. Although this observation does not agree with several other studies (e.g., Convery et al., 2005; Smeds et al., 2006), NAL-NL2 recommends gain adaptation for new hearing aid users with moderate or severe hearing loss. Further details of this discrepancy will be addressed in a future publication (Keidser, 2012, personal communication).

The developers of the NAL-NL2 formula determined that adults with mild to moderate hearing loss preferred less overall gain for 65dB inputs than would be prescribed by NAL-NL1 (Keidser et al., 2008). This is corroborated by other studies (Smeds et al., 2006; Zakis et al., 2007) in which hearing aid users with mild to moderate hearing loss preferred less gain for high and low level inputs. These reports indicate that participants generally preferred slightly less gain and higher compression ratios than those prescribed by NAL-NL1, a preference that was incorporated into the revised prescriptive procedure.

The NAL-NL2 also takes the hearing aid user’s language into consideration. For speakers of tonal languages, slightly more low-frequency gain is prescribed. Increased gain in the low-frequency region more effectively conveys fundamental frequency information, an especially important cue for recognition of tonal languages.

Like its predecessor, the NAL-NL2 fitting formula leverages theoretical models of intelligibility and loudness perception to maximize speech recognition without exceeding normal loudness. The revised formula takes into consideration a number of factors other than audiometric information and benefits from extensive empirical data collected using NAL-NL1. Ultimately, the NAL-NL2 procedure results in a slightly flatter frequency response, with relatively more gain across low and high frequencies and less gain in the mid-frequency range than in the NAL-NL1 formula.  The study of objective performance and subjective preference with hearing aids is constantly evolving and the NAL-NL2 prescriptive method may be a step toward achieving increased acceptance by existing hearing aid users and improved spontaneous acceptance by new hearing aid users.

References

American National Standards Institute (1997). Methods for calculation of the speech intelligibility index. ANSI S3.5-1997. Acoustical Society of America, New York.

Ching, T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions for audibility and the limited role of high frequency amplification. Journal of the Acoustical Society of America 103 (2), 1128-1140.

Dillon, H. (1999). Page Ten: NAL-NL1: A new procedure for fitting non-linear hearing aids. Hearing Journal 52, 10-16.

Dillon, H., Keidser, G., Ching, T. & Flax, M. (2010). The NAL-NL2 Prescription Formula, conference paper, Audiology Australia 19th National Conference, May 2010.

Epstein, M. & Florentine, M. (2009). Binaural loudness summation for speech and tones presented via earphones and loudspeakers. Ear and Hearing 30(2), 234-237.

Keidser, G. & Dillon, H. (2006). What’s new in prescriptive fittings down under? In: Palmer, C.V., Seewald, R. (Eds.), Hearing Care for Adults 2006. Phonak AG, Stafa, Switzerland, pp. 133-142.

Keidser, G., Dillon, H., Dyrlund, O., Carter, L. & Hartley, D. (2007). Preferred low and high frequency compression ratios among hearing aid users with moderately severe to profound hearing loss. Journal of the American Academy of Audiology 18(1), 17-33.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing aid users. International Journal of Audiology 47(10), 621-635.

Keidser, G., Dillon, H., Flax, M., Ching, T. & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research 1 (e24), 88-90.

Moore, B.C.J. & Glasberg, B. (1997). A model of loudness perception applied to cochlear hearing loss. Auditory Neuroscience 3, 289-311.

Moore, B.C.J. & Glasberg, B. (2004). A revised model of loudness perception applied to cochlear hearing loss. Hearing Research 188, 70-88.

Smeds, K., Keidser, G., Zakis, J., Dillon, H., Leijon, A., Grant, F., Convery, E. & Brew, C. (2006). Preferred overall loudness. II: Listening through hearing aids in field and laboratory tests. International Journal of Audiology 45(1), 12-25.

Zakis, J., Dillon, H. & McDermott, H.J. (2007). The design and evaluation of a hearing aid with trainable amplification parameters. Ear and Hearing 28(6), 812-830.

What motivates hearing aid use?

Jenstad, L. & Moon, J. (2011). Systematic review of barriers and facilitators to hearing aid uptake in older adults. Audiology Research 1:e25, 91-96.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Though some causes of adult-onset hearing loss are treated medically or surgically, hearing aid use is by far the most common treatment. Yet only about 25% of adults who could benefit from hearing instruments actually wear them (Kochkin, 2000; Meister et al., 2008). A number of studies have examined the factors that prevent individuals from purchasing hearing aids, and Jenstad and Moon’s objective was to systematically review the literature to identify the main barriers to hearing aid uptake in older adults.

They included subjective and objective reports, but limited this investigation to studies with more than 50 subjects over the age of 65, who had never used hearing aids, had at least mild to moderate sensorineural hearing loss and were in good general health. From an initial set of 388 abstracts, they eliminated studies about children, cochlear implants, medical aspects of hearing loss, auditory processing or hearing aid outcomes.  From the remaining 50 articles, the report focused on 14 papers that met the inclusion criteria. Hearing aid uptake was defined as a hearing aid purchase, but some studies included willingness to purchase.  Based on the literature review, Jenstad and Moon identified a set of predictors of hearing aid uptake in older adults. Some of the predictors they described may be helpful discussion points for clinicians counseling potential hearing aid users.

Self-reported hearing loss was evaluated in questionnaires and hearing-handicap indices that examined hearing-related quality of life as well as activity and participation limitations (Chang et al., 2009; Helvik et al., 2008; Garstecki & Erler, 1998; Meister et al., 2008; Palmer, 2009). Not surprisingly, as self-reported hearing loss increased, study participants were more likely or willing to obtain hearing aids. In other words, the more aware individuals were of their hearing-related difficulties, the more likely they were to purchase hearing aids. With this in mind, clinicians should instruct unmotivated hearing aid candidates to pay close attention to their hearing-related difficulties while determining the need for amplification, and should work with patients to identify activities they must do (e.g., work) or enjoy doing (e.g., dining out, going to the theater). Using this information to understand the extent to which hearing loss disrupts communication in these situations will make patients aware of their own hearing handicap and point toward opportunities for treatment and counseling.

Stigma was predictive of hearing aid acceptance in some studies (Franks & Beckmann, 1985; Garstecki & Erler, 1998; Kochkin, 2007; Meister et al., 2008; Wallhagen, 2010), but overall its effect on hearing aid uptake was inconsistent. In 1985, Franks & Beckmann found that stigma was the highest concern among their subjects, whereas in 2008, Meister and his associates found that stigma accounted for only 8% of the variability in hearing aid uptake. The negative stigma associated with hearing aids is assumed to relate to the appearance of the aid and the perception of hearing loss by other people; hearing aid users with high concern therefore desire small, discreet instruments. Improvements in technology allow for smaller, sleeker designs that make the hearing aid, and the hearing loss, less noticeable, so hearing aid users no longer have to make an obvious acknowledgement of their hearing impairment.

Degree of hearing loss was a significant factor in the decision to obtain hearing aids, but the effect seems to be modified by gender. Garstecki & Erler (1998) found that degree of hearing loss was more likely to affect hearing aid uptake for females than males, but this finding was not reported in other studies. In general, as degree of hearing loss worsens, people are more willing to wear hearing aids. Detailed discussion of audiometric findings, with visual references to speech and environmental sound levels, helps to familiarize the patient and their family with the degree of hearing loss and its impact on speech perception. Tools like hearing loss simulators offer a convenient way to educate and motivate patients toward acceptance of hearing aids.

Personality and psychological factors affected hearing aid uptake in three studies (Cox et al., 2005; Garstecki & Erler, 1998; Helvik et al., 2008). Cox and her colleagues found that hearing aid “seekers” were less neurotic, less open and more agreeable than those who did not seek hearing aids.  Internal locus of control predicted hearing aid acceptance in Cox’s study, but Garstecki and Erler found that it was only predictive for female subjects. Though locus of control is one among many factors influencing the decision, the choice to obtain hearing aids should be presented as a way to assume control of the hearing impairment and make proactive steps toward improving communication abilities.

Helvik et al. (2008) found that subjects who reported using fewer maladaptive coping strategies, such as dominating conversations or avoiding social situations, were less likely to accept hearing aids. Many hearing-impaired individuals use poor coping strategies without realizing it. It seems counterintuitive that lower reported use of maladaptive strategies would accompany lower hearing aid acceptance, but the authors surmised that the study participants who rejected hearing aids may have been in denial about both the hearing loss and their use of poor communication strategies. Hearing-impaired individuals may not be aware of the extent of their communication difficulties and may not realize how often they misunderstand conversation or require others to make extra efforts. Including family members in the discussion of hearing aid candidacy is critical to making the hearing-impaired individual aware of how their loss affects others and how the use of poor or ineffective strategies may result in frustration for themselves and other conversational participants.

Cost was a barrier to hearing aid use in some studies but was not a significant factor in others (Meister et al., 2008). Jenstad and Moon point out that cost may affect hearing aid acceptance in more than one way: Kochkin’s 2007 survey found that 64% of respondents reported that they could not afford hearing aids, whereas 45% said that hearing aids are not worth the expense. There are ways in which clinicians can address both of these issues with hearing aid candidates. First, improvements in technology have made quality instruments available at a wide range of prices. Most manufacturers offer a broad product line, with entry-level instruments in custom and BTE styles. Clients should be assured that their hearing loss, lifestyle and listening needs will determine a range of options to choose from. Lower-cost hearing aids might require more manual adjustment than aids with sophisticated automatic features, but with proper training and programming some lower-cost options might work quite well. Additionally, unbundled pricing and financing options may help potential hearing aid users afford the purchase price. Together, these strategies make cost less of a barrier for many potential hearing aid candidates.

Kochkin’s finding that 45% of respondents felt hearing aids were not worth the expense is perhaps more difficult to address. Some of the bias against hearing aids is related to inappropriate hearing aid selection or inadequate training and follow-up care. Most clinicians have encountered clients with a friend or neighbor who doesn’t like their hearing aids. Negative experiences with hearing aids may be more likely related to selection, programming and follow-up care than to the quality of the hearing instruments themselves. The finest hearing aid available will be rejected if it is inappropriate for the user’s hearing loss or lifestyle or is programmed improperly. Unfortunately, many people who have an unsuccessful experience have acquired their hearing aids through non-clinical channels. These people often blame their dissatisfaction on the quality of the hearing aid, contributing to a larger general perception that hearing aids are not worth the price. Clinicians must emphasize the importance of the care that they provide. Thorough verification, validation and follow-up care by well-trained, credentialed clinical specialists will affect patients’ perceptions and lead them toward success.

The effect of age on hearing aid uptake was unclear in Jenstad and Moon’s review. One study showed a slight increase in hearing aid uptake with increasing age (Helvik et al., 2008), whereas another showed a stronger increase with age (Hidalgo, 2009). In contrast, Uchida et al. (2008) found that hearing aid uptake decreased with increasing age. The effect of age, if any, on hearing aid acceptance will be confounded by other variables such as degree of loss, lifestyle, general health and financial constraints. Therefore, age should be a minor consideration with reference to hearing aid candidacy but remains highly relevant when discussing specific options such as manual controls, automatic features and hearing aid styles.

Gender affected the predictive value of several factors including stigma, degree of loss and locus of control. Hidalgo (2009) found that in general males were more likely to report a need for hearing aids than were females. Gender in itself might not be a strong predictor, so it probably should not be specifically considered in discussions with potential hearing aid users as other variables appear to have more impact on the decision to pursue hearing aids.

Franks and Beckmann reported that individuals who chose not to purchase hearing aids were more likely to report that hearing aids were inconvenient to wear. Though the study was done in 1985, their findings merit consideration today. Since then, hearing aids have become smaller, more effective and less troublesome because of advances like feedback cancellation, directionality and noise reduction. However, the fact remains that hearing aids must be worn, cleaned and cared for daily, and in most cases batteries must be changed on a weekly basis. Use and care guidelines should be balanced by discussion of the likely benefits of hearing aid use and the positive effect they have on communication in everyday situations. With the technological sophistication that today’s hearing aids offer, the known benefits should outweigh any perceived inconvenience.

Jenstad and Moon have clarified some of the primary barriers to hearing aid uptake, providing useful information for clinicians working with hearing aid candidates. The predictors they discussed can be addressed systematically to quell concerns about and underscore the need for hearing instruments. Discussing these issues at the outset may encourage motivated clients to proceed with a hearing aid purchase and provide helpful considerations for those who are not yet ready to pursue amplification. With many potential places to purchase and limited information to guide patients toward qualified hearing care professionals, internet sales offer the appealing promise of quality hearing instruments at lower costs than may be found in a clinic.  But consumers must be educated that a key to successful hearing aid use is the support of the professional, not the quality of the device itself.  Anyone can “sell” a quality hearing aid but only a trained professional can make appropriate clinical decisions and recommendations.

References

Chang, H.P., Ho, C.Y. & Chou, P. (2009). The factors associated with a self-perceived hearing handicap in elderly people with hearing impairment – results from a community-based study. Ear and Hearing 30(5), 576-583.

Cox, R.M., Alexander, G.C. & Gray, G.A. (2005). Who wants a hearing aid? Personality profiles of hearing aid seekers. Ear and Hearing 26(1), 12-26.

Franks, J.R. & Beckmann, N.J. (1985). Rejection of hearing aids: attitudes of a geriatric sample. Ear and Hearing 6(3), 161-166.

Garstecki, D.C. & Erler, S. F. (1998). Hearing loss, control and demographic factors influencing hearing aid use among older adults. Journal of Speech, Language and Hearing Research 41(3), 527-537.

Helvik, A.S., Wennberg, S., Jacobsen, G. & Hallberg, L.R. (2008). Why do some individuals with objectively verified hearing loss reject hearing aids? Audiological Medicine 6(2), 141-148.

Hidalgo, J.L., Gras, C.B., Lapeira, J.T., Verdejo, M.A., del Campo, D.C. & Rabadan, F.E. (2009). Functional status of elderly people with hearing loss. Archives of Gerontology and Geriatrics 49(1), 88-92.

Jenstad, L. & Moon, J. (2011). Systematic review of barriers and facilitators to hearing aid uptake in older adults. Audiology Research 1:e25, 91-96.

Kochkin, S. (2000). MarkeTrak V: “Why my hearing aids are in the drawer”: the consumer’s perspective. Hearing Journal 53(2), 34-41.

Meister, H., Walger, M., Brehmer, D., von Wedel, U. & von Wedel, J. (2008). The relationship between pre-fitting expectations and willingness to use hearing aids. International Journal of Audiology 47(4), 153-159.

Palmer, C.V., Solodar, H.S., Hurley, W.R., Byrne, D.C. & Williams, K.O. (2009). Self-perception of hearing ability as a strong predictor of hearing aid purchase. Journal of the American Academy of Audiology 20(6), 341-347.

Do hearing aid wearers benefit from visual cues?

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors. 

The benefits of directional microphone use have been consistently supported by experimental data in the laboratory (Valente et al. 1995; Ricketts & Hornsby 2006; Gravel et al. 1999; Kuk et al. 1999). Similarly, hearing aid users have indicated a preference for directional microphones over omnidirectional processing in noise in controlled environments (Preves et al. 1999; Walden et al. 2005; Amlani et al. 2006). Despite the robust directional benefit reported in laboratory studies, field studies have yielded less impressive results, with some studies reporting perceived benefit (Preves et al. 1999; Ricketts et al. 2003) while others have not (Walden et al. 2000; Cord et al. 2002, 2004; Palmer et al. 2006).

One factor that could account for reduced directional benefit reported in field studies is the availability of visual cues. It is well established that visual cues, including lip-reading (Sumby & Pollack 1954) as well as eyebrow (Bernstein et al. 1989) and head movements (Munhall et al. 2004), can improve speech recognition ability in the presence of noise. In field studies, the availability of visual cues could result in a decreased directional benefit due to ceiling effects. In other words, the benefit of audio-visual (AV) speech cues might result in omnidirectional performance so close to a listener’s maximum ability that directionality may offer only limited additional improvement.  This could reduce both measured and perceived directional benefits.  It follows that ceiling effects from the availability of AV speech cues could also reduce the ability of auditory-only (AO) laboratory findings to accurately predict real-world performance.

Few studies have investigated the effect of visual cues on hearing aid performance or directional benefit. Wu and Bentler’s goal in the current study was to determine if visual cues could partially account for the discrepancy between laboratory and field studies of directional benefit. They outlined three experimental hypotheses:

1. Listeners would obtain less directional benefit and would prefer directional over omnidirectional microphone modes less frequently in auditory-visual (AV) conditions than in auditory-only (AO) conditions.

2. The AV directional benefit would not be predicted by the AO directional benefit.

3. Listeners with greater lip-reading skills would obtain less AV directional benefit than would listeners with lesser lip-reading skills.

Twenty-four adults with hearing loss participated in the study. Participants were between the ages of 20 and 79 years, had bilaterally symmetrical, downward-sloping, sensorineural hearing losses, had normal or corrected-to-normal vision and were native English speakers. Participants were fitted with bilateral, digital, in-the-ear hearing instruments with manually-accessible omnidirectional and directional microphone modes.

Directional benefit was assessed with two speech recognition measures: the AV version of the Connected Speech Test (CST; Cox et al., 1987) and the Hearing in Noise Test (HINT; Nilsson et al., 1994). For the AV CST, the talker was displayed on a 17” monitor. Participants listened to sets of CST sentences again in a second session to evaluate subjective preference for directional versus omnidirectional microphone modes. Speech stimuli were presented in six signal-to-noise ratio (SNR) conditions ranging from -10dB to +10dB in 4dB steps. Lip-reading ability was assessed with the Utley test (Utley, 1946), an inventory of 31 sentences recited without sound or facial exaggeration.

Analysis of the CST scores yielded significant main effects for SNR, microphone mode and presentation mode (AV vs. AO), as well as significant interactions among the variables. The benefit from visual cues was greater than the benefit afforded by directionality. As the authors expected, for most SNRs the directional benefit was smaller in AV conditions than in AO conditions, with the exception of the poorest SNR condition, -10dB. Scores for all conditions (AV-DIR, AV-OMNI, AO-DIR, AO-OMNI) plateaued at ceiling levels for the most favorable SNRs, meaning that both AV benefit and directional benefit decreased as SNR improved to +10dB. HINT scores, which were not subject to ceiling effects, yielded a significant mean directional benefit of 3.9dB.

Participants preferred the directional microphone mode in the AO condition, especially at SNRs between -6dB and +2dB. At more favorable SNRs, there was essentially no preference. In the AV condition, participants were less likely to prefer the directional mode, except at the poorest SNR, -10dB. Further analysis revealed that the odds of preferring the directional mode in the AO condition were 1.37 times higher than in the AV condition. In other words, adding visual cues reduced overall preference for the directional microphone mode.
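The odds-ratio comparison reported above can be illustrated with a short sketch. The preference proportions below are hypothetical, chosen only to show how an odds ratio of this magnitude is formed from preference data; they are not the study's actual figures.

```python
# Illustration of an odds-ratio comparison of directional-mode
# preference in AO vs. AV conditions. The proportions are
# hypothetical, NOT the study's actual data.

def odds(p: float) -> float:
    """Convert a proportion preferring directional mode into odds."""
    return p / (1.0 - p)

p_dir_ao = 0.60  # hypothetical proportion preferring directional, AO
p_dir_av = 0.52  # hypothetical proportion preferring directional, AV

odds_ratio = odds(p_dir_ao) / odds(p_dir_av)
print(round(odds_ratio, 2))  # 1.38 with these illustrative numbers
```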

At intermediate and favorable SNRs there was no significant correlation between AV directional benefit and the Utley lip-reading scores. For unfavorable SNRs, the negative correlation between these variables was significant, indicating that in the most difficult listening conditions, listeners with better lip-reading skills obtained less AV directional benefit than those participants who were less adept at lip-reading.

The outcomes of these experiments generally support the authors’ hypotheses. Visual cues significantly improved speech recognition scores in omnidirectional trials to near-ceiling levels, reducing directional benefit and subjective preference for directional microphone modes. Auditory-only (AO) performance, typical of laboratory testing, was not predictive of auditory-visual (AV) performance. This agrees with prior indications that AO directional benefit as measured in laboratory conditions does not match real-world directional benefit, and suggests that the availability of visual cues can at least partially explain the discrepancy. The authors suggested that directional benefit should theoretically allow a listener to rely less on visual cues. However, face-to-face conversation is natural, and hearing-impaired listeners should take advantage of visual cues whenever they are available.

The results of Wu and Bentler’s study suggest that directional microphones may provide only limited additional benefit when visual cues are available, for all but the most difficult listening environments. In the poorest SNRs, directional microphones may be leveraged for greater benefit. Still, the authors point out that mean speech recognition scores were best when both directionality and visual cues were available. It follows that directional microphones should be recommended for use in the presence of competing noise, especially in high-noise conditions. Even if speech recognition ability is not significantly improved with the use of directional microphones in many typical SNRs, there may be other subjective benefits to directionality, such as reduced listening effort, distraction or annoyance, that listeners respond favorably to.

It is important for clinicians to prepare new users of directional microphones to have realistic expectations. Clients should be advised that directionality can reduce competing noise but not eliminate it. Hearing aid users should be encouraged to consider their positioning relative to competing noise sources and to always face the speech source that they wish to attend to. Although visual cues appear to offer greater benefits to speech recognition than directional microphones alone, the availability of visual speech cues may be compromised by poor lighting, glare, crowded conditions or visual disabilities, making directional microphones all the more important for many everyday situations. Thus all efforts should be made to maximize directionality and the availability of visual cues in day-to-day situations, as both offer potential real-world benefits.

References

Amlani, A.M., Rakerd, B. & Punch, J.L. (2006). Speech-clarity judgments of hearing aid processed speech in noise: differing polar patterns and acoustic environments. International Journal of Audiology 12, 202-214.

Bernstein, L.E., Eberhardt, S.P. & Demorest, M.E. (1989). Single-channel vibrotactile supplements to visual perception of intonation and stress. Journal of the Acoustical Society of America 85, 397-405.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing 8, 119S-126S.

Gravel, J.S., Fausel, N., Liskow, C., et al. (1999). Children’s speech recognition in noise using omnidirectional and dual-microphone hearing aid technology. Ear and Hearing 20, 1-11.

Kuk, F., Kollofski, C., Brown, S., et al. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology 10, 535-548.

Lee L., Lau, C. & Sullivan, D. (1998). The advantage of a low compression threshold in directional microphones. Hearing Review 5, 30-32.

Leeuw, A.R. & Dreschler, W.A. (1991). Advantages of directional hearing aid microphones related to room acoustics. Audiology 30, 330-344.

Munhall, K.G., Jones, J.A., Callan, D.E., et al. (2004). Visual prosody and speech intelligibility: head movement improves auditory speech perception. Psychological Science 15, 133-137.

Nilsson, M., Soli, S. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Palmer, C., Bentler, R., & Mueller, H.G. (2006). Evaluation of a second order directional microphone hearing aid: Part II – Self-report outcomes. Journal of the American Academy of Audiology 17, 190-201.

Preves, D.A., Sammeth, C.A. & Wynne, M.K. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument.  Journal of the American Academy of Audiology 10, 273-284.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. & Hornsby, B.W. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing 24, 472-484.

Ricketts, T. & Hornsby, B.W. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology 45, 190-197.

Ricketts, T., Henry, P. & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing 24, 424-439.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T., et al. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology 11, 540-560.

Walden, B.E., Surr, R.K . Grant, K.W., et al. (2005). Effect of signal-to-noise ratio on directional microphone benefit and preference. Journal of the American Academy of Audiology 16, 662-676.

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

Differences Between Directional Benefit in the Lab and the Real World

Relationship Between Laboratory Measures of Directional Advantage and Everyday Success with Directional Microphone Hearing Aids

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

People with hearing loss require a better signal-to-noise ratio (SNR) than individuals with normal hearing (Dubno et al., 1984; Gelfand et al., 1988; Bronkhorst & Plomp, 1990). Among many technological improvements, a directional microphone is arguably the only hearing aid feature that effectively improves SNR and, subsequently, speech understanding in noise. A wide range of studies support the benefit of directionality for speech perception in competing noise (Agnew & Block, 1997; Nilsson et al., 1994; Ricketts & Henry, 2002; Valente et al., 1995). Directional benefit is defined as the difference in speech recognition ability between omnidirectional and directional microphone modes. In laboratory conditions, directional benefit averages around 7-8dB but varies considerably, ranging from as little as 2-3dB to as much as 14-16dB (Valente et al., 1995; Agnew & Block, 1997).
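The definition of directional benefit above can be expressed as a simple calculation. When benefit is measured with an adaptive speech-in-noise test, it is the difference between the speech reception thresholds (SRTs) obtained in the two microphone modes; the example SRTs below are hypothetical values chosen for illustration.

```python
# Directional benefit in dB, as defined above: the difference between
# SRTs measured in omnidirectional and directional modes. A lower SRT
# means better performance, so benefit = SRT_omni - SRT_dir.
# The example SRTs are hypothetical.

def directional_benefit_db(srt_omni_db: float, srt_dir_db: float) -> float:
    return srt_omni_db - srt_dir_db

print(directional_benefit_db(-2.0, -9.5))  # 7.5 dB, within the typical lab range
```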

Perceived directional benefit varies considerably among hearing aid users. Cord et al (2002) interviewed individuals who wore hearing aids with switchable directional microphones; 23% reported that they did not use the directional feature. Many respondents said they had initially tried the directional mode but did not notice adequate improvement in their ability to understand speech and therefore stopped using it. This discrepancy between measured and perceived benefit has prompted exploration of the variables that affect performance with directional hearing aids. Under laboratory conditions, Ricketts and Mueller (2000) examined the effects of audiometric configuration, degree of high frequency hearing loss and aided omnidirectional performance on directional benefit, but found no significant interactions among these variables.

The current study by Cord and her colleagues examined the relationship between measured directional advantage in the laboratory and success with directional microphones in everyday life. The authors studied a number of demographic and audiological variables, including audiometric configuration, unaided SRT, hours of daily hearing aid use and length of experience with current hearing aids, in an effort to determine their value for predicting everyday success with directional microphones.

Twenty hearing-impaired individuals were selected to participate in one of two subject groups. The “successful” group consisted of individuals who reported regular use of both omnidirectional and directional microphone modes. The “unsuccessful” group reported never using the directional mode, relying on the omnidirectional mode all the time. Analysis of audiological and demographic information showed that the only significant threshold differences between the successful and unsuccessful groups were at 6-8 kHz; otherwise the two groups had very similar audiometric configurations, on average. There were no significant differences between the two groups for age, unaided SRT, unaided word recognition scores, hours of daily use or length of experience with hearing aids.

Subjects were fitted with a variety of styles – some BTE and some custom – but all had manually accessible omnidirectional and directional settings. The Hearing in Noise Test (HINT; Nilsson et al, 1994) was administered to subjects with their hearing aids in directional and omnidirectional modes. Sentence stimuli were presented in front of the subject and correlated competing noise was presented through three speakers: directly behind the subject and on each side. Following the HINT, participants completed the Listening Situations Survey (LSS), a questionnaire developed specifically for this study. The LSS was designed to assess how likely participants were to encounter disruptive background noise in everyday situations, to determine whether unsuccessful and successful directional microphone users were equally likely to encounter noisy situations in everyday life. The survey consisted of four questions:

1) On average, how often are you in listening situations in which bothersome background noise is present?

2) How often are you in social situations in which at least 3 other people are present?

3) How often are you in meetings (e.g. community, religious, work, classroom, etc.)?

4) How often are you talking with someone in a restaurant or dining hall setting?

The HINT results showed an average directional benefit of 3.2dB for successful users and 2.1dB for unsuccessful users. Although directional benefit was slightly greater for the successful users, the difference between the groups was not statistically significant. There was a broad range of directional benefit in both groups: from -0.8 to 6.0dB for successful users and from -3.4 to 10.5dB for unsuccessful users. Interestingly, three of the ten successful users obtained little or no directional benefit, whereas seven of the ten unsuccessful users obtained positive directional benefit.

Analysis of the LSS results showed that successful users of directional microphones were somewhat more likely than unsuccessful users to encounter listening situations with bothersome background noise and social situations with at least three other people present. However, statistical analysis showed no significant differences between the two groups for any items on the LSS, indicating that users who perceived directional benefit and used their directional microphones were not significantly more likely to encounter noisy situations in everyday life.

These observations led the authors to conclude that directional benefit as measured in the laboratory did not predict success with directional microphones in everyday life. Some participants with positive directional advantage scores were unsuccessful directional microphone users and conversely, some successful users showed little or no directional advantage. There are a number of potential explanations for their findings. First, despite the LSS results, it is possible that unsuccessful users did not encounter real-life listening situations in which directional microphones would be likely to help. Directional microphone benefit is dependent on specific characteristics of the listening environment (Cord et al, 2002; Surr et al, 2002; Walden et al, 2004), and is most likely to help when the speech source is in front of and relatively close to the listener, with spatial separation between the speech and noise sources. Individuals who rarely encounter this specific listening situation would have limited opportunity to evaluate directional microphones and may therefore perceive only limited benefit from them.

Unsuccessful directional microphone users may also have had unrealistically high expectations about directional benefits. Directionality can be a subtle but effective way of improving speech understanding in noise. Reduction of sound from the back and sides helps the listener focus attention on the speaker and ignore competing noise. Directional benefit is based on the concept of face-to-face communication; if users expect their hearing aids to reduce background noise arriving from all angles, they are likely to be disappointed. Similarly, if they expect the aids to completely eliminate background noise, rather than modestly reduce it, they will be unimpressed. It is helpful for hearing aid users, especially those new to directional microphones, to be counseled about realistic expectations as well as proper positioning in noisy environments. If listeners know what to expect and are able to position themselves for maximum directional effect, they are more likely to perceive benefit from their hearing aids in noisy conditions.

To date, it has been difficult to correlate directional benefit under laboratory conditions with perceived directional benefit. It is clear that directionality offers performance benefits in noise, but directional benefit measured in a sound booth does not seem to predict everyday success with directional microphones. Many factors are likely to affect real-life performance with directional microphone hearing aids, including audiometric variables, the frequency response and gain equalization of the directional mode, the venting of the hearing aid and the contribution of visual cues to speech understanding (Ricketts, 2000a; 2000b). Further investigation is needed to elucidate the impact of these variables on the everyday experiences of hearing aid users.

As is true for all hearing aid features, directional microphones must be prescribed appropriately and hearing aid users should be counseled about realistic expectations and the circumstances in which directionality is beneficial. Although most modern hearing instruments can adjust automatically to changing environments, manually accessed directional modes offer wearers increased flexibility and may increase use by allowing the individual to make decisions about their own comfort and performance in noisy places. Routine reinforcement of techniques for proper directional microphone use is encouraged. Hearing aid users should be encouraged to experiment with their directional programs to determine where and when they are most helpful. For the patient, proper identification of, and positioning in, noisy environments is an essential step toward meeting their specific listening needs and preferences.

References

Agnew, J. & Block, M. (1997). HINT thresholds for a dual-microphone BTE. Hearing Review 4, 26-30.

Bronkhorst, A. & Plomp, R. (1990). A clinical test for the assessment of binaural speech perception in noise. Audiology 29, 275-285.

Cord, M.T., Surr, R.K., Walden, B.E. & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Dubno, J.R., Dirks, D.D. & Morgan, D.E. (1984).  Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Gelfand, S.A., Ross, L. & Miller, S. (1988). Sentence reception in noise from one versus two sources: effects of aging and hearing loss. Journal of the Acoustical Society of America 83, 248-256.

Kochkin, S. (1993). MarkeTrak III identifies key factors in determining customer satisfaction. Hearing Journal 46, 39-44.

Nilsson, M., Soli, S.D. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Ricketts, T. (2000a). Directivity quantification in hearing aids: fitting and measurement effects. Ear and Hearing 21, 44-58.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. (2001). Directional hearing aids. Trends in Amplification 5, 139-175.

Ricketts, T.  & Henry, P. (2002). Evaluation of an adaptive, directional microphone hearing aid. International Journal of Audiology 41, 100-112.

Ricketts, T. & Henry, P. (2003). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology 11, 1-13.

Ricketts, T. & Mueller, H.G. (2000). Predicting directional hearing aid benefit for individual listeners. Journal of the American Academy of Audiology 11, 561-569.

Surr, R.K., Walden, B.E., Cord, M.T. & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. & Dyrlund, O. (2004). Predicting microphone preference in everyday living. Journal of the American Academy of Audiology 15, 365-396.

Are you prescribing an appropriate MPO?

Effect of MPO and Noise Reduction on Speech Recognition in Noise

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the original authors.

A component of clinical best practice would suggest that clinicians determine a patient’s uncomfortable listening levels in order to prescribe the output limiting characteristics of a hearing aid (Hawkins et al., 1987). The optimal maximum power output (MPO) should be based on two goals: preventing loudness discomfort and avoiding distorted sound quality at high input levels. The upper limit of a prescribed MPO must allow comfortable listening; less consideration is usually given to the consequences that under-prescribing the MPO might have on hearing aid and patient performance.

There are two primary concerns related to the acceptable lower MPO limit: saturation and insufficient loudness. Saturation occurs when the input level of a stimulus plus gains applied by the hearing aid exceed the MPO, causing distortion and temporal smearing (Dillon & Storey, 1998). This results in a degradation of speech cues and a perceived lack of clarity, particularly in the presence of competing noise. Similarly, insufficient loudness reduces the availability of speech cues. There are numerous reports of subjective degradation of sound when MPO is set lower than prescribed levels, particularly in linear hearing instruments (Kuk et al., 2008; Storey et al., 1998; Preminger, et al., 2001). There is not yet consensus on whether low MPO levels also cause objective degradation in performance.
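The saturation condition described above can be sketched as simple level arithmetic: whenever the input level plus the applied gain exceeds the MPO, the output is limited at the MPO and clipping distortion begins. A minimal Python sketch (the dB values are illustrative assumptions, not figures from the study):

```python
def channel_output(input_db_spl, gain_db, mpo_db_spl):
    """Return (output level, saturated?) for one channel: the ideal
    output (input + gain) is capped at the MPO. When the cap is hit,
    the signal is clipped and speech cues are degraded."""
    ideal = input_db_spl + gain_db
    return min(ideal, mpo_db_spl), ideal > mpo_db_spl

# Illustrative case: a 75 dB SPL noise input with 35 dB of gain against
# a 105 dB SPL MPO; the ideal 110 dB SPL output is clipped to 105 dB SPL.
level, saturated = channel_output(75, 35, 105)
```

Lowering the MPO widens the range of inputs that saturate, while gain reductions from compression or noise reduction shrink it again, which is consistent with the mitigation the authors hypothesized.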

The purpose of the study described here was to determine if sub-optimal MPO could affect speech intelligibility in the presence of noise, even in a multi-channel, nonlinear hearing aid. Furthermore, the authors examined if gain reductions from a noise reduction algorithm could mitigate the detrimental effects of the lower MPO. The authors reasoned that a reduction in output at higher input levels, via compression and noise reduction, could reduce saturation and temporal distortion.

Eleven adults with flat, severe hearing losses participated in the reviewed study. Subjects were fitted bilaterally with 15-channel, wide dynamic range compression, behind-the-ear hearing aids. Microphones were set to omnidirectional and other than noise reduction, no special features were activated during the study. Subjects responded to stimuli from the Hearing in Noise Test (HINT, Nilsson et al., 1994) presented at a 0-degree azimuth angle in the presence of continuous speech-shaped noise. The HINT stimuli yielded reception thresholds for speech (RTS) scores for each test condition.

Test conditions included two MPO prescriptions: the default MPO level (Pascoe, 1989) and 10dB below that level. The lower setting was chosen based on previous work that reported an approximately 18dB acceptable MPO range for listeners with severe hearing loss  (Storey et al., 1998). MPOs set at 10dB below default would therefore be likely to approach the low end of the acceptable range, resulting in perceptual consequences. Speech-shaped noise was presented at two levels: 68dB and 75dB. Testing was done with and without digital noise reduction (DNR).

Analysis of the HINT RTS scores yielded significant main effects of MPO and DNR, as well as significant interactions between MPO and DNR, and DNR and noise level. There was no significant difference between the two noise level conditions. Subjects performed better with the default MPO setting versus the reduced MPO setting. The interaction between the MPO and DNR showed that subjects’ performance in the low-MPO condition was less degraded when DNR was activated. These findings support the authors’ hypotheses that reduced MPO can adversely affect speech discrimination and that noise reduction processing can at least partially mitigate these adverse effects.

Prescriptive formulae have proven to be reasonably good predictors of acceptable MPO levels (Storey et al., 1998; Preminger et al., 2001). In contrast, there is some question as to the value of clinical UCL testing prior to fitting, especially when validation with loudness measures is performed after the fitting (Mackersie, 2006). Improper instruction for the UCL task may yield inappropriately low UCL estimates, resulting in compromised performance and sound quality. The authors of the current paper recommend following prescriptive targets for MPO and conducting verification measures after the fitting, such as real-ear saturation response (RESR) measurements and subjective loudness judgments.

Another scenario, and an ultimately avoidable one, involves individuals who have been fitted with inappropriate instruments for their loss, usually because of cosmetic concerns. It is unfortunately not so unusual for some individuals with severe hearing losses to be fitted with RIC or CIC instruments because of their desirable cosmetic characteristics. Smaller receivers will likely have MPOs that are too low for hearing aid users with severe hearing loss. Many hearing-aid users may not realize they are giving anything up when they select a CIC or RIC and may view these styles as equally appropriate options for their loss. The hearing aid selection process must therefore be guided by the clinician; clients should be educated about the benefits and limitations of various hearing aid options and counseled about the adverse effects of under-fitting their loss with a more cosmetically appealing option.

The results of the current study are important because they illuminate an issue related to hearing aid output that might not always be taken into clinical consideration. MPO settings are usually thought of as a way to prevent loudness discomfort, so the concern is to avoid setting the MPO too high. Kuk and his colleagues have shown that an MPO that is too low could also have adverse effects and have provided valuable information to help clinicians select appropriate MPO settings. Additionally, their findings show objective benefits and support the use of noise reduction strategies, particularly for individuals with reduced dynamic range due to severe hearing loss or tolerance issues. Of course their findings may not be generalizable to all multi-channel compression instruments, with the wide variety of compression characteristics that are available, but they present important considerations that should be examined in further detail with other instruments.

References

ANSI (1997). ANSI S3.5-1997. American National Standards methods for the calculation of the speech intelligibility index. American National Standards Institute, New York.

Dillon, H. & Storey, L. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: theoretical derivation. Ear and Hearing 19(4), 255-266.

Hawkins, D., Walden, B., Montgomery, A. & Prosek, R. (1987). Description and validation of an LDL procedure designed to select SSPL90. Ear and Hearing 8, 162-169.

Kuk, F., Korhonen, P., Baekgaard, L. & Jessen, A. (2008). MPO: A forgotten parameter in hearing aid fitting. Hearing Review 15(6), 34-40.

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010, fast track article.

Kuk, F. & Paludan-Muller, C. (2006). Noise management algorithm may improve speech intelligibility in noise. Hearing Journal 59(4), 62-65.

Mackersie, C. (2006). Hearing aid maximum output and loudness discomfort: are unaided loudness measures needed? Journal of the American Academy of Audiology 18 (6), 504-514.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95(2), 1085-1099.

Pascoe, D. (1989). Clinical measurements of the auditory dynamic range and their relation to formulae for hearing aid gain. In J. Jensen (Ed.), Hearing Aid Fitting: Theoretical and Practical Views. Proceedings of the 13th Danavox Symposium. Copenhagen: Danavox, pp. 129-152.

Preminger, J., Neuman, A. & Cunningham, D. (2001). The selection and validation of output sound pressure level in multichannel hearing aids. Ear and Hearing 22(6), 487-500.

Storey, L., Dillon, H., Yeend, I. & Wigney, D. (1998). The National Acoustic Laboratories, procedure for selecting the saturation sound pressure level of hearing aids: experimental validation. Ear and Hearing 19(4), 267-279.

Addressing patient complaints when fine-tuning a hearing aid

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

As part of any clinically robust protocol, a hearing aid fitting will be objectively verified with real-ear measures and validated with a speech-in-noise test. Fine tuning and follow-up adjustments are an equally important part of the fitting process. This stage does not follow standardized procedures and is almost always directed by a patient’s complaints or descriptions of real-world experience with the hearing aids. This can be a challenging dynamic for the clinician. Patients may have difficulty putting their auditory experience into words, and different people may describe similar sound quality issues in different ways. Additionally, there may be several ways to address any given complaint, and a given programming adjustment may not have the same effect on different hearing aids.

Hearing aid manufacturers often include a fine-tuning guide or automated fitting assistant within their software to help the clinician make appropriate adjustments for common patient complaints. There are limitations to the effectiveness of these fine tuning guides in that they are inherently specific to a limited range of products and the suggested adjustments are subject to the expertise and resources of that manufacturer.  The manner in which a sound quality complaint is described may differ between manufacturers and the recommended adjustments in response to the complaint may differ as well.

There have been a number of efforts to develop a single hearing aid troubleshooting guide that could be used across devices and manufacturers (Moore et al., 1998; Gabrielsson et al., 1979, 1988, 1990; Lundberg et al., 1992; Ovegard et al., 1997). The first and perhaps most challenging step toward this goal has been to determine the most common descriptors that patients use for sound quality complaints. Moore and his colleagues (1998) developed a procedure in which responses on three rating scales (e.g., “loud versus quiet”, “tinny versus boomy”) were used to make adjustments to gain and compression settings. However, their procedure did not allow for the bevy of descriptors that patients create, limiting its potential utility in everyday clinical settings. Gabrielsson and colleagues, in a series of Swedish studies, developed a set of reliable terms to describe sound quality. These descriptors have since been translated and used in English language research (Bentler et al., 1993).

As hearing instruments become more complicated with numerous adjustable parameters, and given the wide range of experience and expertise of individuals fitting hearing instruments today, an independent fine tuning guide is an appealing concept. Lorienne Jenstad and her colleagues proposed an “expert system” for troubleshooting hearing aid complaints.  The authors explained that expert systems “emulate the decision making abilities of human experts” (Tharpe et al., 1993).  To develop the system, two primary questions were asked:

1) What terms do hearing impaired listeners use to describe their reactions to specific hearing aid fitting problems?

2) What is the expert consensus on how these patient complaints can be addressed by hearing aid adjustment?

There were two phases to the project. To address question one, the authors surveyed clinicians for their reports on how patients describe sound quality with regard to specific fitting problems. To address question two, the most frequently reported descriptors from the clinicians’ responses were submitted to a panel of experts to determine how they would address the complaints.

The authors sent surveys to 1934 American Academy of Audiology members and received 311 qualifying responses. The surveys listed 18 open-ended questions designed to elicit descriptive terms that patients would likely use for hearing aid fitting problems. For example, the question “If the fitting has too much low-frequency gain…” yielded responses such as “hollow”, “plugged” and “echo”.  The questions probed common problems related to gain, maximum output, compression, physical fit, distortion and feedback.  The survey responses yielded a list of the 40 most frequent descriptors of hearing aid fitting problems, ranked according to the number of occurrences.

The list of descriptors was used to develop a questionnaire to probe potential solutions for each problem.  Each descriptor was put in the context of, “How would you change the fitting if your patient reports that ___?”, and 23 possible fitting solutions were listed.  These questionnaires were completed by a panel of experts with a minimum of five years of clinical experience. Respondents could offer more than one solution to a problem and the solutions were weighted based on the order in which they were offered. There was strong agreement among experts, suggesting that their responses could be used reliably to provide troubleshooting solutions based on sound quality descriptions. The expert responses also agreed with the initial survey that was sent to the group of 1934 audiologists, supporting the validity of these response sets.

The expert responses resulted in a fine-tuning guide in the form of tables or simplified flow charts. The charts list individual descriptors with potential solutions listed below in the order in which they should be attempted.  For example, below the descriptor “My ear feels plugged”, the first solution is to “increase vent” and the second is to “decrease low frequency gain”. The idea is that the clinician would first try to increase the vent diameter and if that didn’t solve the problem, they would move on to the second option, decreasing low frequency gain. If an attempted solution creates another sound quality problem, the table can be utilized to address that problem in the same way.
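In software terms, a fine-tuning guide of this kind is an ordered mapping from complaint descriptors to candidate adjustments, tried in sequence. A minimal Python sketch built around the “plugged” example from the paper (the data structure and function names are illustrative, not part of the authors’ system):

```python
# Each descriptor maps to its solutions in the order they should be tried.
FINE_TUNING_GUIDE = {
    "my ear feels plugged": ["increase vent", "decrease low frequency gain"],
    # ...remaining descriptors from the expert survey would be added here
}

def next_solution(descriptor, attempts_made):
    """Return the next untried solution for a complaint, or None once the
    ordered list of expert recommendations is exhausted."""
    solutions = FINE_TUNING_GUIDE.get(descriptor.strip().lower(), [])
    return solutions[attempts_made] if attempts_made < len(solutions) else None
```

If an attempted solution introduces a new complaint, the same lookup is simply repeated with the new descriptor, mirroring the flow-chart use the authors describe.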

The authors correctly point out that there are limitations to this tool and that proposed solutions will not necessarily have the same results with all hearing aids. For instance, depending on the compressor characteristics, raising a kneepoint might increase OR decrease the gain at input levels below the kneepoint. It is up to the clinician to be familiar with a given hearing aid and its adjustable parameters to arrive at the appropriate course of action.

Beyond manipulation of the hearing aid itself, the optimal solution for a particular patient complaint might not be the first recommendation in any tuning guide. For instance, for the fitting problem labeled “Hearing aid is whistling”, the fourth solution listed in the table is “check for cerumen”.  This solution appeared fourth in the ranking based on the frequency of responses from the experts on the panel. However, any competent clinician who encounters a patient with hearing aid feedback should check for cerumen first before considering programming modifications.

The expert system proposed by Jenstad and her colleagues represents a thoroughly examined, reliable step toward development of a universal troubleshooting guide for clinicians. Their paper was published in 2003, so some items should be updated to suit modern hearing aids. For example, current feedback management strategies result in fewer and less challenging feedback problems.  Solutions for feedback complaints might now include, “calibrate feedback management system” versus gain or vent adjustments. Similarly, most hearing aids now have solutions for listening in noise that extend beyond the simple inclusion of directional microphones, so “directional microphone” might not be an appropriately descriptive solution to address complaints about hearing in noise, as the patient is probably already using a directional microphone.

Overall, the expert system proposed by Jenstad and colleagues is a helpful clinical tool, especially if positioned as a guide to help patients find the appropriate terms to describe their perceptions. However, as the authors point out, it is not meant to replace prescriptive methods, measures of verification and validation, or the expertise of the audiologist. The responsibility rests with the clinician to be informed about current technology and its implications for real-world hearing aid performance, and to communicate with patients in enough detail to understand their comments and address them appropriately.

References

Bentler, R.A., Nieburh, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness II: subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

Moore, B.C.J., Alcantara, J.I. & Glasberg, B.R. (1998). Development and evaluation of a procedure for fitting multi-channel compression hearing aids. British Journal of Audiology 32, 177-195.

Gabrielsson, A. (1979). Dimension analyses of perceived sound quality of sound-reproducing systems. Scandinavian Journal of Psychology 20, 159-169.

Gabrielsson, A., Hagerman, B., Bech-Kristensen, T. & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America 88, 1359-1366.

Gabrielsson, A., Schenkman, B.N. & Hagerman, B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. Journal of Speech and Hearing Research 31, 166-177.

Lundberg, G., Ovegard, A., Hagerman, B., Gabrielsson, A. & Brandstrom, U. (1992). Perceived sound quality in a hearing aid with vented and closed earmold equalized in frequency response. Scandinavian Audiology 21, 87-92.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A., Bengtsson, M. & Brandstrom, U. (1997). Sound quality judgments during acclimatization of hearing aids. Scandinavian Audiology 26, 43-51.

Schweitzer, C., Mortz, M. & Vaughan, N. (1999). Perhaps not by prescription – but by perception. High Performance Hearing Solutions 3, 58-62.

Tharpe, A.M., Biswas, G. & Hall, J.W. (1993). Development of an expert system for pediatric auditory brainstem response interpretation. Journal of the American Academy of Audiology 4, 163-171.