Starkey Research & Clinical Blog

Understanding the NAL-NL2

Keidser, G., Dillon, H., Flax, M., Ching, T. & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research 1 (e24), 88-90.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

For years, the National Acoustic Laboratories’ NAL-NL1 has been the benchmark for compressive, independently derived prescriptive formulas (Dillon, 1999). The recently introduced NAL-NL2 advances the original formula with knowledge gained from a wealth of empirical data collected with NAL-NL1 (Keidser et al., 2011). NAL-NL1 and NAL-NL2 share the same primary goal, maximizing speech intelligibility without exceeding normal overall loudness across a range of input levels, and both rely on predictive models of speech intelligibility and loudness (Moore & Glasberg, 1997; 2004).

The speech intelligibility model used in both NAL-NL1 and NAL-NL2 differs from the Speech Intelligibility Index (SII; ANSI, 1997). The ANSI SII assumes that, regardless of hearing loss, speech should be fully understood when all speech components are audible. NAL-NL1 instead incorporates a modification to the SII proposed by Ching and colleagues (1998). This modification, known as the effective audibility factor, assumes that as hearing loss becomes more severe, less information can be extracted from the speech signal even when it is audible. More recent data have been collected to derive an updated effective audibility factor for use with NAL-NL2 (Keidser et al., 2011).
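To make the distinction concrete, the sketch below (in Python) contrasts the plain audibility assumption with an effective-audibility adjustment. The band weights, the 40dB knee and the linear desensitization function are invented for illustration; they are not the actual ANSI S3.5 calculation or the NAL model.

# Hypothetical band importance weights (sum to 1.0).
BAND_IMPORTANCE = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]

def intelligibility_index(audibility, hearing_loss_db, effective=True):
    """audibility: per-band proportion of speech that is audible (0-1).
    hearing_loss_db: per-band hearing thresholds in dB HL."""
    index = 0.0
    for weight, aud, loss in zip(BAND_IMPORTANCE, audibility, hearing_loss_db):
        if effective:
            # Assumed desensitization: audible speech yields less usable
            # information as the loss in a band grows beyond 40dB HL.
            extraction = max(0.0, 1.0 - max(0.0, loss - 40.0) / 80.0)
        else:
            extraction = 1.0  # plain SII assumption: audible implies usable
        index += weight * aud * extraction
    return index

fully_audible = [1.0] * 6
print(intelligibility_index(fully_audible, [30] * 6))  # mild loss: 1.0
print(intelligibility_index(fully_audible, [90] * 6))  # severe loss: 0.375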

The NAL-NL2 formula includes constraints to prevent compression ratios from exceeding a maximum value for a given frequency or degree of hearing loss. These modifications were based on data suggesting that users with severe or profound hearing loss prefer lower compression ratios than those prescribed by NAL-NL1 when fitted with fast-acting compression (Keidser et al., 2007). However, there is evidence to suggest that higher compression ratios could be used in this population with slow-acting compression. Therefore, in the case of severe or profound hearing losses, the new formula prescribes lower compression ratios for fittings with fast-acting compression than for those with slow-acting compression. For mild and moderate losses, compression speed does not affect prescribed compression ratios.
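As a rough illustration of this kind of constraint, the sketch below caps a prescribed compression ratio by loss severity and compressor speed. The 60dB cutoff and the cap values are assumptions made up for this example; NAL-NL2’s actual limits are frequency-dependent and are not reproduced here.

def cap_compression_ratio(prescribed_cr, avg_loss_db, fast_acting):
    """Limit the compression ratio for severe or profound losses (illustrative)."""
    if avg_loss_db >= 60:  # assumed severe/profound cutoff
        max_cr = 2.0 if fast_acting else 3.0  # lower cap for fast-acting compression
        return min(prescribed_cr, max_cr)
    return prescribed_cr  # mild/moderate: compression speed does not matter

print(cap_compression_ratio(3.5, 75, fast_acting=True))   # 2.0
print(cap_compression_ratio(3.5, 75, fast_acting=False))  # 3.0
print(cap_compression_ratio(1.8, 45, fast_acting=True))   # 1.8, unchanged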

Based on experimental outcomes with NAL-NL1 fittings, the development of NAL-NL2 took various attributes of the hearing aid user into consideration, such as gender, binaural listening, experience level, age and language. In the case of gender, Keidser and Dillon (2006) studied the real-ear insertion gain measurements for the preferred frequency responses of 187 adults, finding that regardless of experience or degree of hearing loss, female participants preferred an average of 2dB less gain than male participants. As a result, gender differences are factored into each fitting.

The NAL-NL2 method still prescribes greater gain for monaural fittings than for binaural fittings, a difference carried over from the NAL-NL1 formula (Dillon et al., 2010), although recent studies suggest that the binaural-to-monaural loudness difference may be smaller than previously indicated (Epstein & Florentine, 2009). For symmetrical hearing losses, the binaural correction ranges from 2dB for inputs below 50dB SPL to 6dB for inputs above 90dB SPL. Because more gain is removed at high input levels than at low ones, binaurally fitted users receive higher prescribed compression ratios than monaural users. For asymmetrical losses, the binaural correction decreases as the asymmetry increases.
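A minimal sketch of this correction, assuming a linear transition between the published endpoints and a made-up taper for asymmetry (the paper does not specify either function):

def binaural_correction_db(input_spl, asymmetry_db=0.0):
    """Gain reduction applied to each ear of a binaural fitting (illustrative)."""
    if input_spl <= 50:
        base = 2.0
    elif input_spl >= 90:
        base = 6.0
    else:
        base = 2.0 + 4.0 * (input_spl - 50.0) / 40.0  # assumed linear transition
    # Assumed taper: the correction shrinks as interaural asymmetry grows.
    return base * max(0.0, 1.0 - asymmetry_db / 30.0)

# 2dB is removed at 50dB SPL but 6dB at 90dB SPL, so binaural output grows
# 4dB less over that input range: a higher effective compression ratio.
print(binaural_correction_db(50), binaural_correction_db(90))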

Experience with hearing aids, as it relates to degree of hearing loss, is a consideration in the NAL-NL2 formula. Keidser and her colleagues (2008) found that with increasing severity of hearing loss, new users prefer progressively less prescribed gain than experienced hearing aid users. Although this observation does not agree with several other studies (e.g., Convery et al., 2005; Smeds et al., 2006), NAL-NL2 recommends gain adaptation for new hearing aid users with moderate or severe hearing loss. Further details of this discrepancy will be addressed in a future publication (Keidser, 2012, personal communication).

The developers of the NAL-NL2 formula determined that adults with mild to moderate hearing loss preferred less overall gain for 65dB inputs than would be prescribed by NAL-NL1 (Keidser et al., 2008). This is corroborated by other studies (Smeds et al., 2006; Zakis et al., 2007) in which hearing aid users with mild to moderate hearing loss preferred less gain for high- and low-level inputs. These reports indicate that participants generally preferred slightly less gain and higher compression ratios than those prescribed by NAL-NL1, a preference that was incorporated into the revised prescriptive procedure.

The NAL-NL2 formula also takes the hearing aid user’s language into consideration. For speakers of tonal languages, slightly more low-frequency gain is prescribed. Increased gain in the low-frequency region more effectively conveys fundamental frequency information, an especially important cue for the recognition of tonal languages.

Like its predecessor, the NAL-NL2 fitting formula leverages theoretical models of intelligibility and loudness perception to maximize speech recognition without exceeding normal loudness. The revised formula takes into consideration a number of factors other than audiometric information and benefits from extensive empirical data collected using NAL-NL1. Ultimately, the NAL-NL2 procedure results in a slightly flatter frequency response, with relatively more gain across low and high frequencies and less gain in the mid-frequency range than the NAL-NL1 formula. The study of objective performance and subjective preference with hearing aids is constantly evolving, and the NAL-NL2 prescriptive method may be a step toward achieving increased acceptance by existing hearing aid users and improved spontaneous acceptance by new hearing aid users.

References

American National Standards Institute (1997). Methods for calculation of the speech intelligibility index. ANSI S3.5-1997. Acoustical Society of America, New York.

Ching, T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions for audibility and the limited role of high frequency amplification. Journal of the Acoustical Society of America 103 (2), 1128-1140.

Dillon, H. (1999). Page Ten: NAL-NL1: A new procedure for fitting non-linear hearing aids. Hearing Journal 52, 10-16.

Dillon, H., Keidser, G., Ching, T. & Flax, M. (2010). The NAL-NL2 prescription formula. Conference paper, Audiology Australia 19th National Conference, May 2010.

Epstein, M. & Florentine, M. (2009). Binaural loudness summation for speech and tones presented via earphones and loudspeakers. Ear and Hearing 30(2), 234-237.

Keidser, G. & Dillon, H. (2006). What’s new in prescriptive fittings down under? In: Palmer, C.V., Seewald, R. (Eds.), Hearing Care for Adults 2006. Phonak AG, Stafa, Switzerland, pp. 133-142.

Keidser, G., Dillon, H., Dyrlund, O., Carter, L. & Hartley, D. (2007). Preferred low and high frequency compression ratios among hearing aid users with moderately severe to profound hearing loss. Journal of the American Academy of Audiology 18(1), 17-33.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing aid users. International Journal of Audiology 47(10), 621-635.

Keidser, G., Dillon, H., Flax, M., Ching, T. & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research 1 (e24), 88-90.

Moore, B.C.J. & Glasberg, B. (1997). A model of loudness perception applied to cochlear hearing loss. Auditory Neuroscience 3, 289-311.

Moore, B.C.J. & Glasberg, B. (2004). A revised model of loudness perception applied to cochlear hearing loss. Hearing Research 188, 70-88.

Smeds, K., Keidser, G., Zakis, J., Dillon, H., Leijon, A., Grant, F., Convery, E. & Brew, C. (2006). Preferred overall loudness. II: Listening through hearing aids in field and laboratory tests. International Journal of Audiology 45(1), 12-25.

Zakis, J., Dillon, H. & McDermott, H.J. (2007). The design and evaluation of a hearing aid with trainable amplification parameters. Ear and Hearing 28(6), 812-830.

What motivates hearing aid use?

Jenstad, L. & Moon, J. (2011). Systematic review of barriers and facilitators to hearing aid uptake in older adults. Audiology Research 1:e25, 91-96.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Though some causes of adult-onset hearing loss are treated medically or surgically, hearing aid use is by far the most common treatment. Yet only about 25% of adults who could benefit from hearing instruments actually wear them (Kochkin, 2000; Meister et al., 2008). A number of studies have examined the factors that prevent individuals from purchasing hearing aids, and Jenstad and Moon’s objective was to systematically review the literature to identify the main barriers to hearing aid uptake in older adults.

They included subjective and objective reports, but limited this investigation to studies with more than 50 subjects over the age of 65, who had never used hearing aids, had at least mild to moderate sensorineural hearing loss and were in good general health. From an initial set of 388 abstracts, they eliminated studies about children, cochlear implants, medical aspects of hearing loss, auditory processing or hearing aid outcomes.  From the remaining 50 articles, the report focused on 14 papers that met the inclusion criteria. Hearing aid uptake was defined as a hearing aid purchase, but some studies included willingness to purchase.  Based on the literature review, Jenstad and Moon identified a set of predictors of hearing aid uptake in older adults. Some of the predictors they described may be helpful discussion points for clinicians counseling potential hearing aid users.

Self-reported hearing loss was evaluated in questionnaires and hearing-handicap indices that examined hearing-related quality of life as well as activity and participation limitations (Chang et al., 2009; Helvik et al., 2008; Garstecki & Erler, 1998; Meister et al., 2008; Palmer, 2009). Not surprisingly, as self-reported hearing loss increased, study participants were more likely or willing to obtain hearing aids. In other words, the more aware individuals were of their hearing-related difficulties, the more likely they were to purchase hearing aids. With this in mind, clinicians should instruct unmotivated hearing aid candidates to pay close attention to their hearing-related difficulties while determining need for amplification. Work together to identify activities that patients must do (e.g., work) or enjoy doing (e.g., dining out, going to the theater). Using this information to illustrate the extent to which hearing loss disrupts communication in these situations will help patients recognize their own hearing handicap and point toward opportunities for treatment and counseling.

Stigma was predictive of hearing aid acceptance in some studies (Franks & Beckmann, 1985; Garstecki & Erler, 1998; Kochkin, 2007; Meister et al., 2008; Wallhagen, 2010), but overall was inconsistent in its effect on hearing aid uptake. In 1985, Franks & Beckmann found that stigma was the highest concern among their subjects, whereas in 2008, Meister and his associates found that stigma only accounted for 8% of the variability in hearing aid uptake. The negative stigma associated with hearing aids is assumed to relate to the appearance of the aid and the perception of hearing loss by other people. Therefore, hearing aid users with high concern desire small, discreet instruments. Improvements in technology allow for smaller, sleeker designs that make the hearing aid—and hearing loss—less noticeable. Therefore, hearing aid users no longer have to make an obvious acknowledgement of their hearing impairment.

Degree of hearing loss was a significant factor in the decision to obtain hearing aids, but the effect seems to be modified by gender. Garstecki & Erler (1998) found that degree of hearing loss was more likely to affect hearing aid uptake for females than males, but this finding was not reported in other studies. In general, as degree of hearing loss worsens, people are more willing to wear hearing aids. Detailed discussion of audiometric findings, with visual references to speech and environmental sound levels, helps to familiarize the patient and their family with degrees of hearing loss and the impact on speech perception. Tools like hearing loss simulators offer a convenient way of educating and motivating patients toward the acceptance of hearing aids.

Personality and psychological factors affected hearing aid uptake in three studies (Cox et al., 2005; Garstecki & Erler, 1998; Helvik et al., 2008). Cox and her colleagues found that hearing aid “seekers” were less neurotic, less open and more agreeable than those who did not seek hearing aids.  Internal locus of control predicted hearing aid acceptance in Cox’s study, but Garstecki and Erler found that it was only predictive for female subjects. Though locus of control is one among many factors influencing the decision, the choice to obtain hearing aids should be presented as a way to assume control of the hearing impairment and make proactive steps toward improving communication abilities.

Helvik (2008) found that subjects who reported using fewer maladaptive coping strategies, such as dominating conversations or avoiding social situations, were less likely to accept hearing aids. Many hearing-impaired individuals use poor coping strategies without realizing it. It may seem counterintuitive that those reporting fewer maladaptive strategies would be less likely to accept hearing aids, but the authors surmised that the study participants who rejected hearing aids may have been in denial about both the hearing loss and their use of poor communication strategies. Hearing-impaired individuals may not be aware of the extent of their communication difficulties and may not realize how often they are misunderstanding conversation or requiring others to make extra efforts. Including family members in the discussion of hearing aid candidacy is critical, both to make the hearing-impaired individual aware of how their loss affects others and to show how the use of poor or ineffective strategies may result in frustration for themselves and other conversational participants.

Cost was a barrier to hearing aid use in some studies but was not a significant factor in others (Meister et al., 2008). Jenstad and Moon point out that cost may affect hearing aid acceptance in more than one way: Kochkin’s 2007 survey found that 64% of respondents reported that they could not afford hearing aids, whereas 45% of respondents said that hearing aids are not worth the expense. There are ways in which clinicians can address both of these issues with hearing aid candidates. First, improvements in technology have made quality instruments available at a wide range of prices. Most manufacturers offer a broad product line, with entry-level instruments in custom and BTE styles. Clients should be assured that their hearing loss, lifestyle and listening needs will determine a range of options to choose from. Lower-cost hearing aids might require more manual adjustment than aids with sophisticated automatic features, but with proper training and programming some lower-cost options might work quite well. Additionally, unbundled pricing and financing options may help potential hearing aid users afford the purchase price. Together, these strategies make cost less of a barrier for many potential hearing aid candidates.

Kochkin’s finding that 45% of respondents felt hearing aids were not worth the expense is perhaps more difficult to address. Some of the bias against hearing aids is related to inappropriate hearing aid selection or inadequate training and follow-up care. Most clinicians have encountered clients with a friend or neighbor who doesn’t like their hearing aids. Negative experiences with hearing aids may be more likely related to selection, programming and follow-up care than to the quality of the hearing instruments themselves. The finest hearing aid available will be rejected if it is inappropriate for the user’s hearing loss or lifestyle or is programmed improperly. Unfortunately, many people who have an unsuccessful experience have acquired their hearing aids through non-clinical channels. These people often blame their dissatisfaction on the quality of the hearing aid, contributing to a larger general perception that hearing aids are not worth the price. Clinicians must emphasize the importance of the care that they provide. Thorough verification, validation and follow-up care by well-trained, credentialed clinical specialists will affect patients’ perceptions and lead them toward success.

The effect of age on hearing aid uptake was unclear in Jenstad and Moon’s review. One study showed a slight increase in hearing aid uptake with increasing age (Helvik et al, 2008), whereas another showed a stronger increase with age (Hidalgo, 2009). In contrast, Uchida et al. (2008) found that hearing aid uptake decreased with increasing age. The effect of age, if any, on hearing aid acceptance will be confounded by other variables such as degree of loss, lifestyle, general health and financial constraints. Therefore, age should be a minor consideration with reference to hearing aid candidacy but remains highly relevant when discussing specific options such as manual controls, automatic features and hearing aid styles.

Gender affected the predictive value of several factors including stigma, degree of loss and locus of control. Hidalgo (2009) found that in general males were more likely to report a need for hearing aids than were females. Gender in itself might not be a strong predictor, so it probably should not be specifically considered in discussions with potential hearing aid users as other variables appear to have more impact on the decision to pursue hearing aids.

Franks and Beckmann reported that individuals who chose not to purchase hearing aids were more likely to report that hearing aids were inconvenient to wear. Though the study was done in 1985, their findings merit consideration today.  Since then, hearing aids have become smaller, more effective and less troublesome because of advances like feedback cancellation, directionality and noise reduction. However, the fact remains that hearing aids must be worn, cleaned and cared for daily and in most cases batteries must be changed on a weekly basis.  Use and care guidelines should be balanced by discussion of the likely benefits of hearing aid use and the positive effect they have on communication in everyday situations.  With the technological sophistication that today’s hearing aids offer the known benefits should outweigh any perceived inconvenience.

Jenstad and Moon have clarified some of the primary barriers to hearing aid uptake, providing useful information for clinicians working with hearing aid candidates. The predictors they discussed can be addressed systematically to quell concerns about and underscore the need for hearing instruments. Discussing these issues at the outset may encourage motivated clients to proceed with a hearing aid purchase and provide helpful considerations for those who are not yet ready to pursue amplification. With many potential places to purchase and limited information to guide patients toward qualified hearing care professionals, internet sales offer the appealing promise of quality hearing instruments at lower costs than may be found in a clinic.  But consumers must be educated that a key to successful hearing aid use is the support of the professional, not the quality of the device itself.  Anyone can “sell” a quality hearing aid but only a trained professional can make appropriate clinical decisions and recommendations.

References

Chang, H.P., Ho, C.Y. & Chou, P. (2009). The factors associated with a self-perceived hearing handicap in elderly people with hearing impairment – results from a community-based study. Ear and Hearing 30(5), 576-583.

Cox, R.M., Alexander, G.C. & Gray, G.A. (2005). Who wants a hearing aid? Personality profiles of hearing aid seekers. Ear and Hearing 26(1), 12-26.

Franks, J.R. & Beckmann, N.J. (1985). Rejection of hearing aids: attitudes of a geriatric sample. Ear and Hearing 6(3), 161-166.

Garstecki, D.C. & Erler, S. F. (1998). Hearing loss, control and demographic factors influencing hearing aid use among older adults. Journal of Speech, Language and Hearing Research 41(3), 527-537.

Helvik, A.S., Wennberg, S., Jacobsen, G. & Hallberg, L.R. (2008). Why do some individuals with objectively verified hearing loss reject hearing aids? Audiological Medicine 6(2), 141-148.

Hidalgo, J.L., Gras, C.B., Lapeira, J.T., Verdejo, M.A., del Campo, D.C. & Rabadan, F.E. (2009). Functional status of elderly people with hearing loss. Archives of Gerontology and Geriatrics 49(1), 88-92.

Jenstad, L. & Moon, J. (2011). Systematic review of barriers and facilitators to hearing aid uptake in older adults. Audiology Research 1:e25, 91-96.

Kochkin, S. (2000). MarkeTrak V: “Why my hearing aids are in the drawer”: the consumer’s perspective. Hearing Journal 53(2), 34-41.

Meister, H., Walger, M., Brehmer, D., von Wedel, U. & von Wedel, J. (2008). The relationship between pre-fitting expectations and willingness to use hearing aids. International Journal of Audiology 47(4), 153-159.

Palmer, C.V., Solodar, H.S., Hurley, W.R., Byrne, D.C. & Williams, K.O. (2009). Self-perception of hearing ability as a strong predictor of hearing aid purchase. Journal of the American Academy of Audiology 20(6), 341-347.

Do hearing aid wearers benefit from visual cues?

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors. 

The benefits of directional microphone use have been consistently supported by experimental data in the laboratory (Valente et al., 1995; Ricketts & Hornsby, 2006; Gravel et al., 1999; Kuk et al., 1999). Similarly, hearing aid users have indicated a preference for directional microphones over omnidirectional processing in noise in controlled environments (Preves et al., 1999; Walden et al., 2005; Amlani et al., 2006). Despite the robust directional benefit reported in laboratory studies, field studies have yielded less impressive results, with some studies reporting perceived benefit (Preves et al., 1999; Ricketts et al., 2003) and others not (Walden et al., 2000; Cord et al., 2002, 2004; Palmer et al., 2006).

One factor that could account for reduced directional benefit reported in field studies is the availability of visual cues. It is well established that visual cues, including lip-reading (Sumby & Pollack 1954) as well as eyebrow (Bernstein et al. 1989) and head movements (Munhall et al. 2004), can improve speech recognition ability in the presence of noise. In field studies, the availability of visual cues could result in a decreased directional benefit due to ceiling effects. In other words, the benefit of audio-visual (AV) speech cues might result in omnidirectional performance so close to a listener’s maximum ability that directionality may offer only limited additional improvement.  This could reduce both measured and perceived directional benefits.  It follows that ceiling effects from the availability of AV speech cues could also reduce the ability of auditory-only (AO) laboratory findings to accurately predict real-world performance.
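The toy model below illustrates the ceiling-effect argument. A logistic performance-intensity function (with invented parameters, not fitted to any data) converts effective SNR into percent correct; treating visual cues as an assumed 8dB-equivalent boost shows how the same 4dB directional advantage produces a large score difference in AO conditions but almost none in AV conditions.

import math

def percent_correct(effective_snr_db, midpoint_db=-4.0, slope=0.5):
    """Hypothetical performance-intensity function."""
    return 100.0 / (1.0 + math.exp(-slope * (effective_snr_db - midpoint_db)))

snr_db = 0.0
directional_advantage_db = 4.0
for visual_boost_db in (0.0, 8.0):  # AO vs. AV; 8dB equivalence is an assumption
    omni = percent_correct(snr_db + visual_boost_db)
    directional = percent_correct(snr_db + visual_boost_db + directional_advantage_db)
    print(round(directional - omni, 1))  # about 10 points AO, 0.2 points AV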

Few studies have investigated the effect of visual cues on hearing aid performance or directional benefit. Wu and Bentler’s goal in the current study was to determine if visual cues could partially account for the discrepancy between laboratory and field studies of directional benefit. They outlined three experimental hypotheses:

1. Listeners would obtain less directional benefit and would prefer directional over omnidirectional microphone modes less frequently in auditory-visual (AV) conditions than in auditory-only (AO) conditions.

2. The AV directional benefit would not be predicted by the AO directional benefit.

3. Listeners with greater lip-reading skills would obtain less AV directional benefit than would listeners with lesser lip-reading skills.

Twenty-four adults with hearing loss participated in the study. Participants were between 20 and 79 years of age, had bilaterally symmetrical, downward-sloping, sensorineural hearing losses, had normal or corrected-to-normal vision and were native English speakers. Participants were fitted with bilateral, digital, in-the-ear hearing instruments with manually accessible omnidirectional and directional microphone modes.

Directional benefit was assessed with two speech recognition measures: the AV version of the Connected Speech Test (CST; Cox et al., 1987) and the Hearing in Noise Test (HINT; Nilsson et al., 1994). For the AV CST, the talker was displayed on a 17” monitor. Participants listened to sets of CST sentences again in a second session to evaluate subjective preference for directional versus omnidirectional microphone modes. Speech stimuli were presented in six signal-to-noise ratio (SNR) conditions ranging from -10dB to +10dB in 4dB steps. Lip-reading ability was assessed with the Utley test (Utley, 1946), an inventory of 31 sentences recited without sound or facial exaggeration.

Analysis of the CST scores yielded significant main effects for SNR, microphone mode and presentation mode (AV vs. AO), as well as significant interactions among the variables. The benefit afforded by visual cues was greater than the benefit afforded by directionality. As the authors expected, the directional benefit was smaller in AV conditions than in AO conditions at most SNRs, with the exception of the poorest SNR condition, -10dB. Scores for all conditions (AV-DIR, AV-OMNI, AO-DIR, AO-OMNI) plateaued at ceiling levels for the most favorable SNRs, meaning that both AV benefit and directional benefit decreased as SNR improved to +10dB. HINT scores, which are not subject to ceiling effects, yielded a significant mean directional benefit of 3.9dB.

Participants preferred the directional microphone mode in the AO condition, especially at SNRs between -6dB and +2dB. At more favorable SNRs, there was essentially no preference. In the AV condition, participants were less likely to prefer the directional mode, except at the poorest SNR, -10dB. Further analysis revealed that the odds of preferring the directional mode in the AO condition were 1.37 times higher than in the AV condition. In other words, adding visual cues reduced overall preference for the directional microphone mode.
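To unpack that statistic, the short sketch below shows the odds-ratio arithmetic, using invented preference proportions chosen only to land near the reported value:

def odds(p):
    return p / (1.0 - p)

p_directional_ao = 0.58  # hypothetical proportion preferring directional, AO
p_directional_av = 0.50  # hypothetical proportion preferring directional, AV
print(odds(p_directional_ao) / odds(p_directional_av))  # ~1.38, near the reported 1.37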

At intermediate and favorable SNRs there was no significant correlation between AV directional benefit and the Utley lip-reading scores. For unfavorable SNRs, the negative correlation between these variables was significant, indicating that in the most difficult listening conditions, listeners with better lip-reading skills obtained less AV directional benefit than those participants who were less adept at lip-reading.

The outcomes of these experiments generally support the authors’ hypotheses. Visual cues significantly improved speech recognition, pushing omnidirectional scores close to ceiling levels and thereby reducing both the directional benefit and the subjective preference for directional microphone modes. Auditory-only (AO) performance, typical of laboratory testing, was not predictive of auditory-visual (AV) performance. This agrees with prior indications that AO directional benefit as measured in laboratory conditions does not match real-world directional benefit, and suggests that the availability of visual cues can at least partially explain the discrepancy. The authors suggested that directional benefit should theoretically allow a listener to rely less on visual cues. However, face-to-face conversation is natural, and hearing-impaired listeners should leverage visual cues when they are available.

The results of Wu and Bentler’s study suggest that directional microphones may provide only limited additional benefit when visual cues are available, for all but the most difficult listening environments. In the poorest SNRs, directional microphones may be leveraged for greater benefit. Still, the authors point out that mean speech recognition scores were best when both directionality and visual cues were available. It follows that directional microphones should be recommended for use in the presence of competing noise, especially in high-noise conditions. Even if speech recognition ability is not significantly improved with the use of directional microphones in many typical SNRs, there may be other subjective benefits to directionality, such as reduced listening effort, distraction or annoyance, that listeners respond favorably to.

It is important for clinicians to prepare new users of directional microphones to have realistic expectations. Clients should be advised that directionality can reduce competing noise but not eliminate it. Hearing aid users should be encouraged to consider their positioning relative to competing noise sources and to always face the speech source that they wish to attend to. Although visual cues appear to offer greater benefits to speech recognition than directional microphones alone, the availability of visual speech cues may be compromised by poor lighting, glare, crowded conditions or visual disabilities, making directional microphones all the more important for many everyday situations. Thus, all efforts should be made to maximize directionality and the availability of visual cues in day-to-day situations, as both offer potential real-world benefits.

References

Amlani, A.M., Rakerd, B. & Punch, J.L. (2006). Speech-clarity judgments of hearing aid processed speech in noise: differing polar patterns and acoustic environments. International Journal of Audiology 12, 202-214.

Bernstein, L.E., Eberhardt, S.P. & Demorest, M.E. (1989). Single-channel vibrotactile supplements to visual perception of intonation and stress. Journal of the Acoustical Society of America 85, 397-405.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing 8, 119S-126S.

Gravel, J.S., Fausel, N., Liskow, C., et al. (1999). Children’s speech recognition in noise using omnidirectional and dual-microphone hearing aid technology. Ear and Hearing 20, 1-11.

Kuk, F., Kollofski, C., Brown, S., et al. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology 10, 535-548.

Lee L., Lau, C. & Sullivan, D. (1998). The advantage of a low compression threshold in directional microphones. Hearing Review 5, 30-32.

Leeuw, A.R. & Dreschler, W.A. (1991). Advantages of directional hearing aid microphones related to room acoustics. Audiology 30, 330-344.

Munhall, K.G., Jones, J.A., Callan, D.E., et al. (2004). Visual prosody and speech intelligibility: head movement improves auditory speech perception. Psychological Science 15, 133-137.

Nilsson, M., Soli, S. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Palmer, C., Bentler, R., & Mueller, H.G. (2006). Evaluation of a second order directional microphone hearing aid: Part II – Self-report outcomes. Journal of the American Academy of Audiology 17, 190-201.

Preves, D.A., Sammeth, C.A. & Wynne, M.K. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument.  Journal of the American Academy of Audiology 10, 273-284.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. & Hornsby, B.W. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing 24, 472-484.

Ricketts, T. & Hornsby, B.W. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology 45, 190-197.

Ricketts, T., Henry, P. & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing 24, 424-439.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T., et al. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology 11, 540-560.

Walden, B.E., Surr, R.K . Grant, K.W., et al. (2005). Effect of signal-to-noise ratio on directional microphone benefit and preference. Journal of the American Academy of Audiology 16, 662-676.

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

Differences Between Directional Benefit in the Lab and the Real World

Relationship Between Laboratory Measures of Directional Advantage and Everyday Success with Directional Microphone Hearing Aids

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

People with hearing loss require a better signal-to-noise ratio (SNR) than individuals with normal hearing (Dubno et al., 1984; Gelfand et al., 1988; Bronkhorst & Plomp, 1990). Among many technological improvements, a directional microphone is arguably the only hearing aid feature that effectively improves SNR and, subsequently, speech understanding in noise. A wide range of studies supports the benefit of directionality for speech perception in competing noise (Agnew & Block, 1997; Nilsson et al., 1994; Ricketts & Henry, 2002; Valente et al., 1995). Directional benefit is defined as the difference in speech recognition ability between omnidirectional and directional microphone modes. In laboratory conditions, directional benefit averages around 7-8dB but varies considerably, ranging from 2-3dB up to 14-16dB (Valente et al., 1995; Agnew & Block, 1997).
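When benefit is measured with an adaptive speech-in-noise test such as the HINT (used later in this study), performance is expressed as a reception threshold for speech (RTS) in dB SNR, where lower is better, and directional benefit reduces to a simple difference. The values below are invented for illustration:

def directional_benefit_db(rts_omni_db, rts_directional_db):
    """SNR improvement of the directional mode over omnidirectional."""
    return rts_omni_db - rts_directional_db

print(directional_benefit_db(-2.0, -5.2))  # 3.2dB benefit (hypothetical RTS values)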

An individual’s perception of directional benefit varies considerably among hearing aid users. Cord et al. (2002) interviewed individuals who wore hearing aids with switchable directional microphones; 23% reported that they did not use the directional feature. Many respondents said they had initially tried the directional mode but did not notice adequate improvement in their ability to understand speech and therefore stopped using it. This discrepancy between measured and perceived benefit has prompted exploration of the variables that affect performance with directional hearing aids. Under laboratory conditions, Ricketts and Mueller (2000) examined the effects of audiometric configuration, degree of high-frequency hearing loss and aided omnidirectional performance on directional benefit, but found no significant relationships among any of these variables.

The current study by Cord and her colleagues examined the relationship between measured directional advantage in the laboratory and success with directional microphones in everyday life. The authors studied a number of demographic and audiological variables, including audiometric configuration, unaided SRT, hours of daily hearing aid use and length of experience with current hearing aids, in an effort to determine their value for predicting everyday success with directional microphones.

Twenty hearing-impaired individuals were selected to participate in one of two subject groups. The “successful” group consisted of individuals who reported regular use of both omnidirectional and directional microphone modes. The “unsuccessful” group consisted of individuals who reported never using their directional mode, relying on the omnidirectional mode all the time. Analysis of audiological and demographic information showed that the only significant differences in audiometric thresholds between the successful and unsuccessful groups were at 6-8 kHz; otherwise the two groups had very similar audiometric configurations, on average. There were no significant differences between the two groups for age, unaided SRT, unaided word recognition scores, hours of daily use or length of experience with hearing aids.

Subjects were fitted with a variety of styles – some BTE and some custom – but all had manually accessible omnidirectional and directional settings. The Hearing in Noise Test (HINT; Nilsson et al, 1994) was administered to subjects with their hearing aids in directional and omnidirectional modes. Sentence stimuli were presented in front of the subject and correlated competing noise was presented through three speakers: directly behind the subject and on each side. Following the HINT participants completed the Listening Situations Survey (LSS), a questionnaire developed specifically for this study. The LSS was designed to assess how likely participants were to encounter disruptive background noise in everyday situations, to determine if unsuccessful and successful directional microphone users were equally likely to encounter noisy situations in everyday life.  The survey consisted of four questions:

1) On average, how often are you in listening situations in which bothersome background noise is present?

2) How often are you in social situations in which at least 3 other people are present?

3) How often are you in meetings (e.g. community, religious, work, classroom, etc.)?

4) How often are you talking with someone in a restaurant or dining hall setting?

The HINT results suggested an average directional benefit of 3.2dB for successful users and 2.1dB for unsuccessful users. Although directional benefit was slightly greater for the successful users, the difference between the groups was not statistically significant. There was a broad range of directional benefit in both groups: from -0.8 to 6.0dB for successful users and from -3.4 to 10.5dB for unsuccessful users. Interestingly, three of the ten successful users obtained little or no directional benefit, whereas seven of the ten unsuccessful users obtained positive directional benefit.

Analysis of the LSS results showed that successful users of directional microphones were somewhat more likely than unsuccessful users to encounter listening situations with bothersome background noise and to encounter social situations with more than three other people present. However, statistical analysis showed no significant differences between the two groups for any items on the LSS survey, indicating that users who perceived directional benefit and used their directional microphones were not significantly more likely to encounter noisy situations in everyday life.

These observations led the authors to conclude that directional benefit as measured in the laboratory did not predict success with directional microphones in everyday life. Some participants with positive directional advantage scores were unsuccessful directional microphone users and conversely, some successful users showed little or no directional advantage. There are a number of potential explanations for their findings. First, despite the LSS results, it is possible that unsuccessful users did not encounter real-life listening situations in which directional microphones would be likely to help. Directional microphone benefit is dependent on specific characteristics of the listening environment (Cord et al, 2002; Surr et al, 2002; Walden et al, 2004), and is most likely to help when the speech source is in front of and relatively close to the listener, with spatial separation between the speech and noise sources. Individuals who rarely encounter this specific listening situation would have limited opportunity to evaluate directional microphones and may therefore perceive only limited benefit from them.

Unsuccessful directional microphone users may also have had unrealistically high expectations about directional benefits. Directionality can be a subtle but effective way of improving speech understanding in noise. Reduction of sound from the back and sides helps the listener focus attention on the speaker and ignore competing noise. Directional benefit is based on the concept of face-to-face communication; if users expect their hearing aids to reduce background noise from all angles, they are likely to be disappointed. Similarly, if they expect the aids to completely eliminate background noise, rather than slightly reduce it, they will be unimpressed. It is helpful for hearing aid users, especially those new to directional microphones, to be counseled about realistic expectations as well as proper positioning in noisy environments. If listeners know what to expect and are able to position themselves for maximum directional effect, they are more likely to perceive benefit from their hearing aids in noisy conditions.

To date, it has been difficult to correlate directional benefit under laboratory conditions with perceived directional benefit. It is clear that directionality offers performance benefits in noise, but directional benefit measured in a sound booth does not seem to predict everyday success with directional microphones. Many factors are likely to affect real-life performance with directional microphone hearing aids, including audiometric variables, the frequency response and gain equalization of the directional mode, the venting of the hearing aid and the contribution of visual cues to speech understanding (Ricketts, 2000a; 2000b). Further investigation is needed to elucidate the impact of these variables on the everyday experiences of hearing aid users.

As is true for all hearing aid features, directional microphones must be prescribed appropriately, and hearing aid users should be counseled about realistic expectations and the circumstances in which directionality is beneficial. Although most modern hearing instruments can adjust automatically to changing environments, manually accessed directional modes offer hearing aid wearers increased flexibility and may increase use by allowing individuals to make their own decisions about comfort and performance in noisy places. Routine reinforcement of techniques for proper directional microphone use is encouraged. Hearing aid users should be encouraged to experiment with their directional programs to determine where and when they are most helpful. For the patient, proper identification of, and positioning in, noisy environments is an essential step toward meeting their specific listening needs and preferences.

References

Agnew, J. & Block, M. (1997). HINT thresholds for a dual-microphone BTE. Hearing Review 4, 26-30.

Bronkhorst, A. & Plomp, R. (1990). A clinical test for the assessment of binaural speech perception in noise. Audiology 29, 275-285.

Cord, M.T., Surr, R.K., Walden, B.E. & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Dubno, J.R., Dirks, D.D. & Morgan, D.E. (1984).  Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Gelfand, S.A., Ross, L. & Miller, S. (1988). Sentence reception in noise from one versus two sources: effects of aging and hearing loss. Journal of the Acoustical Society of America 83, 248-256.

Kochkin, S. (1993). MarkeTrak III identifies key factors in determining customer satisfaction. Hearing Journal 46, 39-44.

Nilsson, M., Soli, S.D. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Ricketts, T. (2000a). Directivity quantification in hearing aids: fitting and measurement effects. Ear and Hearing 21, 44-58.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. (2001). Directional hearing aids. Trends in Amplification 5, 139-175.

Ricketts, T.  & Henry, P. (2002). Evaluation of an adaptive, directional microphone hearing aid. International Journal of Audiology 41, 100-112.

Ricketts, T. & Henry, P. (2003). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology 11, 1-13.

Ricketts, T. & Mueller, H.G. (2000). Predicting directional hearing aid benefit for individual listeners. Journal of the American Academy of Audiology 11, 561-569.

Surr, R.K., Walden, B.E. Cord, M.T. & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. & Dyrlund, O. (2004). Predicting microphone preference in everyday living. Journal of the American Academy of Audiology 15, 365-396.

Are you prescribing an appropriate MPO?

Effect of MPO and Noise Reduction on Speech Recognition in Noise

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the original authors.

Clinical best practice suggests that clinicians determine a patient’s uncomfortable loudness levels in order to prescribe the output limiting characteristics of a hearing aid (Hawkins et al., 1987). The optimal maximum power output (MPO) should be based on two goals: preventing loudness discomfort and avoiding distorted sound quality at high input levels. The upper limit of a prescribed MPO must allow comfortable listening; less consideration is typically given to the consequences that under-prescribing the MPO might have on hearing aid and patient performance.

There are two primary concerns related to the acceptable lower MPO limit: saturation and insufficient loudness. Saturation occurs when the input level of a stimulus plus gains applied by the hearing aid exceed the MPO, causing distortion and temporal smearing (Dillon & Storey, 1998). This results in a degradation of speech cues and a perceived lack of clarity, particularly in the presence of competing noise. Similarly, insufficient loudness reduces the availability of speech cues. There are numerous reports of subjective degradation of sound when MPO is set lower than prescribed levels, particularly in linear hearing instruments (Kuk et al., 2008; Storey et al., 1998; Preminger, et al., 2001). There is not yet consensus on whether low MPO levels also cause objective degradation in performance.

The purpose of the study described here was to determine if sub-optimal MPO could affect speech intelligibility in the presence of noise, even in a multi-channel, nonlinear hearing aid. Furthermore, the authors examined if gain reductions from a noise reduction algorithm could mitigate the detrimental effects of the lower MPO. The authors reasoned that a reduction in output at higher input levels, via compression and noise reduction, could reduce saturation and temporal distortion.

Eleven adults with flat, severe hearing losses participated in the reviewed study. Subjects were fitted bilaterally with 15-channel, wide dynamic range compression, behind-the-ear hearing aids. Microphones were set to omnidirectional and other than noise reduction, no special features were activated during the study. Subjects responded to stimuli from the Hearing in Noise Test (HINT, Nilsson et al., 1994) presented at a 0-degree azimuth angle in the presence of continuous speech-shaped noise. The HINT stimuli yielded reception thresholds for speech (RTS) scores for each test condition.

Test conditions included two MPO prescriptions: the default MPO level (Pascoe, 1989) and 10dB below that level. The lower setting was chosen based on previous work that reported an approximately 18dB acceptable MPO range for listeners with severe hearing loss (Storey et al., 1998). MPOs set 10dB below default would therefore be likely to approach the low end of the acceptable range, resulting in perceptual consequences. Speech-shaped noise was presented at two levels: 68dB and 75dB. Testing was done with and without digital noise reduction (DNR).
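A minimal sketch of the saturation problem, modeling the aid’s output as hard-limited at the MPO. The gain and MPO values are invented, and real saturation also smears the temporal envelope, which this simple level calculation does not capture.

def aided_output_db(input_db, gain_db, mpo_db):
    """Output level with hard limiting at the MPO (illustrative)."""
    return min(input_db + gain_db, mpo_db)

DEFAULT_MPO = 125.0               # assumed default prescription, dB SPL
REDUCED_MPO = DEFAULT_MPO - 10.0  # the study's lowered condition
for speech_peak_db in (85.0, 95.0):
    print(speech_peak_db,
          aided_output_db(speech_peak_db, 35.0, DEFAULT_MPO),   # 120, then 125
          aided_output_db(speech_peak_db, 35.0, REDUCED_MPO))   # 115 both times
# With the reduced MPO, distinct peak levels collapse to the same 115dB SPL:
# peak information is clipped away, the degradation the study measured.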

Analysis of the HINT RTS scores yielded significant main effects of MPO and DNR, as well as significant interactions between MPO and DNR, and DNR and noise level. There was no significant difference between the two noise level conditions. Subjects performed better with the default MPO setting versus the reduced MPO setting. The interaction between the MPO and DNR showed that subjects’ performance in the low-MPO condition was less degraded when DNR was activated. These findings support the authors’ hypotheses that reduced MPO can adversely affect speech discrimination and that noise reduction processing can at least partially mitigate these adverse effects.

Prescriptive formulae have proven to be reasonably good predictors of acceptable MPO levels (Storey et al., 1998; Preminger et al., 2001). In contrast, there is some question as to the value of clinical UCL testing prior to fitting, especially when validation with loudness measures is performed after the fitting (Mackersie, 2006). Improper instruction for the UCL task may yield inappropriately low UCL estimates, resulting in compromised performance and sound quality. The authors of the current paper recommend following prescriptive targets for MPO and conducting verification measures after the fitting, such as the real-ear saturation response (RESR) and subjective loudness judgments.

Another scenario, and an ultimately avoidable one, involves individuals who have been fitted with instruments inappropriate for their loss, usually because of cosmetic concerns. It is unfortunately not unusual for individuals with severe hearing losses to be fitted with RIC or CIC instruments because of their desirable cosmetic characteristics. Smaller receivers will likely have MPOs that are too low for hearing aid users with severe hearing loss. Many hearing aid users may not realize they are giving anything up when they select a CIC or RIC and may view these styles as equally appropriate options for their loss. The hearing aid selection process must therefore be guided by the clinician; clients should be educated about the benefits and limitations of various hearing aid options and counseled about the adverse effects of under-fitting their loss with a more cosmetically appealing option.

The results of the current study are important because they illuminate an issue related to hearing aid output that might not always be taken into clinical consideration. MPO settings are usually thought of as a way to prevent loudness discomfort, so the concern is to avoid setting the MPO too high. Kuk and his colleagues have shown that an MPO that is too low could also have adverse effects and have provided valuable information to help clinicians select appropriate MPO settings. Additionally, their findings show objective benefits and support the use of noise reduction strategies, particularly for individuals with reduced dynamic range due to severe hearing loss or tolerance issues. Of course their findings may not be generalizable to all multi-channel compression instruments, with the wide variety of compression characteristics that are available, but they present important considerations that should be examined in further detail with other instruments.

References

ANSI (1997). ANSI S3.5-1997. American National Standards methods for the calculation of the speech intelligibility index. American National Standards Institute, New York.

Dillon, H. & Storey, L. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: theoretical derivation. Ear and Hearing 19(4), 255-266.

Hawkins, D., Walden, B., Montgomery, A. & Prosek, R. (1987). Description and validation of an LDL procedure designed to select SSPL90. Ear and Hearing 8, 162-169.

Kuk, F., Korhonen, P., Baekgaard, L. & Jessen, A. (2008). MPO: A forgotten parameter in hearing aid fitting. Hearing Review 15(6), 34-40.

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010, fast track article.

Kuk, F. & Paludan-Muller, C. (2006). Noise management algorithm may improve speech intelligibility in noise. Hearing Journal 59(4), 62-65.

Mackersie, C. (2006). Hearing aid maximum output and loudness discomfort: are unaided loudness measures needed? Journal of the American Academy of Audiology 18 (6), 504-514.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95(2), 1085-1099.

Pascoe, D. (1989). Clinical measurements of the auditory dynamic range and their relation to formulae for hearing aid gain. In J. Jensen (Ed.), Hearing Aid Fitting: Theoretical and Practical Views. Proceedings of the 13th Danavox Symposium. Copenhagen: Danavox, pp. 129-152.

Preminger, J., Neuman, A. & Cunningham, D. (2001). The selection and validation of output sound pressure level in multichannel hearing aids. Ear and Hearing 22(6), 487-500.

Storey, L., Dillon, H., Yeend, I. & Wigney, D. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: experimental validation. Ear and Hearing 19(4), 267-279.

Addressing patient complaints when fine-tuning a hearing aid

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patients’ descriptions. Journal of the American Academy of Audiology 14(7).

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

As part of any clinically robust protocol, a hearing aid fitting will be objectively verified with real-ear measures and validated with a speech-in-noise test. Fine tuning and follow-up adjustments are an equally important part of the fitting process. This stage of the routine fitting process does not follow standardized procedures and is almost always directed by a patient’s complaints or descriptions of real-world experience with the hearing aids. This can be a challenging dynamic for the clinician. Patients may have difficulty putting their auditory experience into words and different people may describe similar sound quality issues in different ways.  Additionally, there may be several ways to address any given complaint and a given programming adjustment may not have the same effect on different hearing aids.

Hearing aid manufacturers often include a fine-tuning guide or automated fitting assistant within their software to help the clinician make appropriate adjustments for common patient complaints. There are limitations to the effectiveness of these fine-tuning guides in that they are inherently specific to a limited range of products and the suggested adjustments are subject to the expertise and resources of that manufacturer. The manner in which a sound quality complaint is described may differ between manufacturers and the recommended adjustments in response to the complaint may differ as well.

There have been a number of efforts to develop a single hearing aid troubleshooting guide that could be used across devices and manufacturers (Moore et al., 1998; Gabrielsson et al., 1979, 1988, 1990; Lundberg et al., 1992; Ovegard et al., 1997). The first and perhaps most challenging step toward this goal has been to determine the most common descriptors that patients use for sound quality complaints. Moore and his colleagues (1998) developed a procedure in which responses on three rating scales (e.g., “loud versus quiet”, “tinny versus boomy”) were used to make adjustments to gain and compression settings. However, their procedure did not allow for the bevy of descriptors that patients create, limiting its potential utility for everyday clinical settings. Gabrielsson and colleagues, in a series of Swedish studies, developed a set of reliable terms to describe sound quality. These descriptors have since been translated and used in English language research (Bentler et al., 1993).

As hearing instruments become more complicated with numerous adjustable parameters, and given the wide range of experience and expertise of individuals fitting hearing instruments today, an independent fine tuning guide is an appealing concept. Lorienne Jenstad and her colleagues proposed an “expert system” for troubleshooting hearing aid complaints.  The authors explained that expert systems “emulate the decision making abilities of human experts” (Tharpe et al., 1993).  To develop the system, two primary questions were asked:

1) What terms do hearing impaired listeners use to describe their reactions to specific hearing aid fitting problems?

2) What is the expert consensus on how these patient complaints can be addressed by hearing aid adjustment?

There were two phases to the project. To address question one, the authors surveyed clinicians for their reports on how patients describe sound quality with regard to specific fitting problems. To address question two, the most frequently reported descriptors from the clinicians’ responses were submitted to a panel of experts to determine how they would address the complaints.

The authors sent surveys to 1934 American Academy of Audiology members and received 311 qualifying responses. The surveys listed 18 open-ended questions designed to elicit descriptive terms that patients would likely use for hearing aid fitting problems. For example, the question “If the fitting has too much low-frequency gain…” yielded responses such as “hollow”, “plugged” and “echo”.  The questions probed common problems related to gain, maximum output, compression, physical fit, distortion and feedback.  The survey responses yielded a list of the 40 most frequent descriptors of hearing aid fitting problems, ranked according to the number of occurrences.

The list of descriptors was used to develop a questionnaire to probe potential solutions for each problem.  Each descriptor was put in the context of, “How would you change the fitting if your patient reports that ___?”, and 23 possible fitting solutions were listed.  These questionnaires were completed by a panel of experts with a minimum of five years of clinical experience. Respondents could offer more than one solution to a problem and the solutions were weighted based on the order in which they were offered. There was strong agreement among experts, suggesting that their responses could be used reliably to provide troubleshooting solutions based on sound quality descriptions. The expert responses also agreed with the initial survey that was sent to the group of 1934 audiologists, supporting the validity of these response sets.

The expert responses resulted in a fine-tuning guide in the form of tables or simplified flow charts. The charts list individual descriptors with potential solutions listed below in the order in which they should be attempted.  For example, below the descriptor “My ear feels plugged”, the first solution is to “increase vent” and the second is to “decrease low frequency gain”. The idea is that the clinician would first try to increase the vent diameter and if that didn’t solve the problem, they would move on to the second option, decreasing low frequency gain. If an attempted solution creates another sound quality problem, the table can be utilized to address that problem in the same way.
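
To make the structure of such a guide concrete, the tables can be thought of as an ordered lookup from a descriptor to its candidate adjustments. The short sketch below is illustrative rather than the authors' implementation; only the "plugged" entry is taken from the example above, and the function name is hypothetical.

```python
# Only the "plugged" entry below is taken from the paper's example; a
# complete table would cover all 40 descriptors and 23 solutions.
FINE_TUNING_GUIDE = {
    "my ear feels plugged": [
        "increase vent",
        "decrease low frequency gain",
    ],
}

def next_adjustment(descriptor, already_tried):
    """Return the next untried adjustment for a complaint, or None."""
    for solution in FINE_TUNING_GUIDE.get(descriptor.lower(), []):
        if solution not in already_tried:
            return solution
    return None

print(next_adjustment("My ear feels plugged", set()))
# -> increase vent
print(next_adjustment("My ear feels plugged", {"increase vent"}))
# -> decrease low frequency gain
```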

The authors correctly point out that there are limitations to this tool and that proposed solutions will not necessarily have the same results with all hearing aids. For instance, depending on the compressor characteristics, raising a kneepoint might increase OR decrease the gain at input levels below the kneepoint. It is up to the clinician to be familiar with a given hearing aid and its adjustable parameters to arrive at the appropriate course of action.

Beyond manipulation of the hearing aid itself, the optimal solution for a particular patient complaint might not be the first recommendation in any tuning guide. For instance, for the fitting problem labeled “Hearing aid is whistling”, the fourth solution listed in the table is “check for cerumen”.  This solution appeared fourth in the ranking based on the frequency of responses from the experts on the panel. However, any competent clinician who encounters a patient with hearing aid feedback should check for cerumen first before considering programming modifications.

The expert system proposed by Jenstad and her colleagues represents a thoroughly examined, reliable step toward development of a universal troubleshooting guide for clinicians. Their paper was published in 2003, so some items should be updated to suit modern hearing aids. For example, current feedback management strategies result in fewer and less challenging feedback problems.  Solutions for feedback complaints might now include, “calibrate feedback management system” versus gain or vent adjustments. Similarly, most hearing aids now have solutions for listening in noise that extend beyond the simple inclusion of directional microphones, so “directional microphone” might not be an appropriately descriptive solution to address complaints about hearing in noise, as the patient is probably already using a directional microphone.

Overall, the expert system proposed by Jenstad and colleagues is a helpful clinical tool; especially if positioned as a guide to help patients find the appropriate terms to describe their perceptions. However, as the authors point out, it is not meant to replace prescriptive methods, measures of verification and validation, or the expertise of the audiologist. The responsibility is with the clinician to be informed about current technology and its implications for real world hearing aid performance and to communicate with their patients in enough detail to understand their patients’ comments and address them appropriately.

References

Bentler, R.A., Niebuhr, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness II: subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patients' descriptions. Journal of the American Academy of Audiology 14 (7).

Moore, B.C.J., Alcantara, J.I. & Glasberg, B.R. (1998). Development and evaluation of a procedure for fitting multi-channel compression hearing aids. British Journal of Audiology 32, 177-195.

Gabrielsson, A. (1979). Dimension analyses of perceived sound quality of sound-reproducing systems. Scandinavian Journal of Psychology 20, 159-169.

Gabrielsson, A., Hagerman, B., Bech-Kristensen, T. & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America 88, 1359-1366.

Gabrielsson, A., Schenkman, B.N. & Hagerman, B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. Journal of Speech and Hearing Research 31, 166-177.

Lundberg, G., Ovegard, A., Hagerman, B., Gabrielsson, A. & Brandstrom, U. (1992). Perceived sound quality in a hearing aid with vented and closed earmold equalized in frequency response. Scandinavian Audiology 21, 87-92.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A., Bengtsson, M. & Brandstrom, U. (1997). Sound quality judgments during acclimatization of hearing aids. Scandinavian Audiology 26, 43-51.

Schweitzer, C., Mortz, M. & Vaughan, N. (1999). Perhaps not by prescription – but by perception. High Performance Hearing Solutions 3, 58-62.

Tharpe, A.M., Biswas, G. & Hall, J.W. (1993). Development of an expert system for pediatric auditory brainstem response interpretation. Journal of the American Academy of Audiology 4, 163-171.

Recommendations for fitting patients with cochlear dead regions

Cochlear Dead Regions in Typical Hearing Aid Candidates:

Prevalence and Implications for Use of High-Frequency Speech Cues

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Audibility is a well-known predictor of speech recognition ability (Humes, 2007) and audibility of high-frequency information is of particular importance for consonant identification.  Therefore, audibility of high-frequency speech cues is appropriately regarded as an important element of successful hearing aid fittings (Killion & Tillman, 1982; Skinner & Miller, 1983). In contrast to this expectation, some studies have reported that high-frequency gain might have limited or even negative impact on speech recognition abilities of some individuals (Murray & Byrne, 1986; Ching et al., 1998; Hogan & Turner, 1998). These researchers observed that when high-frequency hearing loss exceeded 55-60dB, some listeners were unable to benefit from increased high-frequency audibility.  A potential explanation for this variability was provided by Brian Moore (2001), who suggested that an inability to benefit from amplification in a particular frequency region could be due to cochlear “dead regions” or regions where there is a loss of inner hair cell functioning.

Moore suggested that hearing aid fittings could potentially be improved if clinicians were able to identify patients with cochlear dead regions (DRs), working under the assumption that a diagnosis of DRs may contraindicate high-frequency amplification. He and his colleagues developed the TEN test as a method of determining the presence of cochlear dead regions (Moore et al., 2000, 2004). The advent of the TEN test provided a standardized measurement protocol for DRs, but there is still wide variability in the reported prevalence of DRs. Estimates range from as low as 29% (Preminger et al., 2005) to as high as 84% (Hornsby & Dundas, 2009), with other studies reporting DR prevalence somewhere in the middle of that range. Several factors are likely to contribute to this variability, including degree of hearing loss, audiometric configuration and test technique.

In addition to the variability in reported prevalence of DRs, there is also variability in reports of how DRs affect the ability to benefit from high-frequency speech cues (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004). It remains unclear whether high-frequency amplification recommendations should be modified to reflect the presence of DRs. Most research agrees that as hearing thresholds increase, the likelihood of DRs also increases, and hearing aid users with severe to profound hearing losses are likely to have at least one DR. Because a large proportion of hearing aid users have moderate to severe hearing losses, Dr. Cox and her colleagues wanted to determine the prevalence of DRs in this population. In addition, they examined the effect of DRs on the use of high-frequency speech cues by individuals with moderate to severe loss.

Their study addressed two primary questions:

1) What is the prevalence of dead regions (DRs) among listeners with hearing thresholds in the 60-90dB range?

2) For individuals with hearing loss in the 60-90dB range, do those with DRs differ from those without DRs in their ability to use high-frequency speech cues?

One hundred and seventy adults with bilateral, flat or sloping sensorineural hearing loss were tested. All subjects had thresholds of 60 to 90dB in the better ear for at least part of the range from 1-3kHz and thresholds no better than 25dB for frequencies below 1kHz. Subjects ranged in age from 38 to 96 years, and 59% of the subjects had experience with hearing aids.

First, subjects were evaluated for the presence of DRs with the TEN test. Then, speech recognition was measured using high-frequency emphasis (HFE) and high-frequency emphasis, low-pass filtered (HFE-LP) stimuli from the QSIN test (Killion et al., 2004). HFE items on this test are amplified up to 32dB above 2.5kHz, whereas the HFE-LP items have much less gain in this range. Comparison of subjects' responses to these two types of stimuli allowed the investigators to assess changes in speech intelligibility with additional high-frequency cues. Presentation levels for the QSIN were chosen by using a loudness scale and bracketing procedure to arrive at a level that the subject considered “loud but okay”. Finally, audibility differences for the two QSIN conditions were estimated using the Speech Intelligibility Index (SII; ANSI S3.5-1997).
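
For readers unfamiliar with the SII, its logic can be illustrated with a simplified calculation: audibility in each frequency band is approximated as the proportion of the roughly 30dB speech dynamic range that lies above threshold, weighted by the band's importance for intelligibility. The sketch below uses invented band levels, thresholds and weights; the full ANSI S3.5-1997 procedure specifies these values precisely.

```python
import numpy as np

# All values below are invented for illustration; the ANSI S3.5-1997
# standard defines the real band levels, thresholds and weights.
bands_hz     = np.array([250, 500, 1000, 2000, 4000])
importance   = np.array([0.15, 0.20, 0.25, 0.25, 0.15])  # sums to 1.0
speech_db    = np.array([55, 58, 52, 48, 42])   # aided speech band levels
threshold_db = np.array([30, 35, 50, 65, 75])   # listener thresholds

# Band audibility: fraction of the ~30 dB speech range above threshold
audible = np.clip((speech_db - threshold_db + 15.0) / 30.0, 0.0, 1.0)
sii = float(np.sum(importance * audible))
print(f"SII = {sii:.2f}")   # the high-frequency bands contribute little here
```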

The TEN test results revealed that 31% of the participants had DRs at one or more test frequencies. Of the 307 ears tested, 23% were found to have a DR at one or more frequencies. Among participants who tested positive for DRs, about one third had DRs in both ears; the remaining two thirds had a DR in one ear only, split equally between left and right ears. Mean audiometric thresholds were essentially identical for the two groups below 1kHz, but above 1kHz thresholds were significantly poorer for the group with DRs than for the group without DRs. DRs were most prevalent at frequencies above 1.5kHz. There were no age or gender differences.

On the QSIN test, the mean HFE-LP scores were significantly poorer than the mean HFE scores for both groups. There was also a significant difference in performance based on whether or not the participants had DRs. Perhaps more interestingly, there was a significant interaction between DR group and test stimulus condition: the additional high-frequency information in the HFE stimuli resulted in slightly greater performance gains for the group without DRs than for the group with DRs. Furthermore, subjects with one or more isolated DRs were better able to benefit from the high-frequency cues in the HFE lists than were subjects with multiple, contiguous DRs. Although a few individuals demonstrated lower scores for the HFE stimuli, the differences were not significant and could have been explained by measurement error. Therefore, the authors conclude that the additional high-frequency information in the HFE stimuli was not likely to have had a detrimental effect on performance for these individuals.

As had also been reported in previous studies, the group with DRs had poorer mean audiometric thresholds than the group without DRs, so it was possible that audibility played a role in QSIN performance. Analysis of the audibility of QSIN stimuli for the two groups revealed that the high-frequency cues in the HFE lists were indeed more audible to the group without DRs. Even after accounting for this audibility effect, however, the presence of DRs still had a small but significant effect on performance.

The results of this study suggest that listeners with cochlear DRs still benefit from high frequency speech cues, albeit slightly less than those without dead regions.  The performance improvements were small and the authors caution that it is premature to draw firm conclusions about the clinical implications of this study.  Despite the need for further examination, the results of the current study certainly do not support any reduction in prescribed gain for hearing aid candidates with moderate to severe hearing losses.  The authors acknowledge, however, that because the findings of this and other studies are based on group data, it is possible that specific individuals may be negatively affected by amplification within dead regions. Based on the research to date, this seems more likely to occur in individuals with profound hearing loss who may have multiple, contiguous DRs.

More study is needed to determine the most effective clinical approach to managing cochlear dead regions in hearing aid candidates. Future research should be done with hearing aid users, examining, for example, the effects of noise on everyday hearing aid performance for individuals with DRs. A study by Mackersie et al. (2004) showed that subjects with DRs suffered more negative effects of noise than did subjects without DRs. If there is a convergence of evidence to this effect, then recommendations about the use of high-frequency gain, directionality and noise reduction could be determined as they relate to DRs. For now, Dr. Cox and her colleagues recommend that until there are clear criteria to identify individuals for whom high-frequency gain could have deleterious effects, clinicians should continue using best-practice protocols and provide high-frequency gain according to current prescriptive methods.

References

ANSI (1997). American National Standard Methods for Calculation of the Speech Intelligibility Index (Vol. ANSI S3.5-1997). New York: American National Standards Institute.

Ching, T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

Hogan, C.A. & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Humes, L.E. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Killion, M. C. & Tillman, T.W. (1982). Evaluation of high-fidelity hearing aids. Journal of Speech and Hearing Research 25, 15-25.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C.J., Huss, M., Vickers, D.A., et al. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C.J., Glasberg, B.R. & Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Murray, N. & Byrne, D. (1986). Performance of hearing-impaired and normal hearing listeners with various high-frequency cut-offs in hearing aids. Australian Journal of Audiology 8, 21-28.

Skinner, M.W. & Miller, J.D. (1983). Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss.  Audiology 22, 253-279.

A preferred speech stimulus for testing hearing aids

Development and Analysis of an International Speech Test Signal (ISTS)

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Current hearing aid functional verification measures are described in the standards IEC 60118 and ANSI S3.22 and use stationary signals, including sine wave frequency sweeps and unmodulated noise. Test stimuli are presented to the hearing instrument, and frequency-specific gain and output are measured in a coupler or ear simulator. Current standardized measurement methods require the instrument to be set to maximum or to a reference test setting, with adaptive parameters such as noise reduction and feedback management turned off.

These procedures provide helpful information for quality assurance and determining fitting ranges for specific hearing aid models. However, because they were designed for linear, time-invariant hearing instruments, they have limitations for today’s nonlinear, adaptive instruments and cannot provide meaningful information about real-life performance in the presence of dynamically changing acoustic environments.

Speech is the most important stimulus encountered by hearing aid users and nonlinear hearing aids with adaptive characteristics process speech differently than they do stationary signals like sine waves and unmodulated noise. Therefore, it seems preferable for standardized test procedures to use stimuli that are as close as possible to natural speech.  Indeed, there are some hearing aid test protocols that use samples of natural speech or live speech. But natural speech stimuli will have different spectra, fundamental frequencies, and temporal characteristics depending on the speaker, the source material and the language. For hearing aid verification measures to be comparable to each other it is necessary to have standardized stimuli that can be used internationally.

Alternative test stimuli have been proposed based on the long-term average speech spectrum (Byrne et al., 1994) or temporal envelope fluctuations (Fastl, 1987). The International Collegium for Rehabilitative Audiology (ICRA) developed a set of stimuli (Dreschler, 2001) that reflect the long-term average speech spectrum and have speech-like modulations that differ across frequency bands.  ICRA stimuli have advantages over modulated noise and sine wave stimuli in that they share some similar characteristics with speech, but they lack speech-like comodulation characteristics (e.g., fundamental frequency). Furthermore, ICRA stimuli are often classified by signal processing algorithms as “noise” rather than “speech”, so they are less than optimal for measuring how hearing aids process speech.

The European Hearing Instrument Manufacturers Association (EHIMA) is developing a new measurement procedure for nonlinear, adaptive hearing instruments and an important part of their initiative is development of a standardized test signal or International Speech Test Signal (ISTS).  The development and analysis of the ISTS was described in a paper by Holube, et al. (2010).

There were fifteen articulated requirements for the ISTS, based on available test signals and knowledge of natural speech, the most clinically salient of which are:

  • The ISTS should resemble normal speech but should be non-intelligible.
  • The ISTS should be based on six major languages, representing a wide range of phonological structures and fundamental frequency variations.
  • The ISTS should be based on female speech and should deviate from the international long-term average speech spectrum (ILTASS) for females by no more than 1dB.
  • The ISTS should have a bandwidth of 100 to 16,000Hz and an overall RMS level of 65dB SPL.
  • The dynamic range should be speech-like and comparable to published values for speech (Cox et al., 1988; Byrne et al., 1994).
  • The ISTS should contain voiced and voiceless components. Voiced components should have a fundamental frequency characteristic of female speech.
  • The ISTS should have short-term spectral variations similar to speech (e.g., formant transitions).
  • The ISTS should have modulation characteristics similar to speech (Plomp, 1984).
  • The ISTS should contain short pauses similar to natural running speech.
  • The ISTS stimulus should have a 60 second duration, from which other durations can be derived.
  • The stimulus should allow for accurate and reproducible measurements regardless of signal duration.

Twenty-one female speakers of six different languages (American English, Arabic, Mandarin, French, German and Spanish) were recorded while reading a story, the text and translations of which came from the Handbook of the International Phonetic Association (IPA). One recording from each language was selected based on a number of criteria including voice quality, naturalness and median fundamental frequency. The recordings were filtered to meet the ILTASS characteristics described by Byrne et al. (1994) and were then split into 500ms segments that roughly corresponded to individual syllables. These syllable-length segments were attached in pseudo-random order to generate sections of 10 or 15 seconds, with no single language used more than once in any 6-segment run. The resulting sections could be combined to generate different durations of the ISTS stimulus. Speech interval and pause durations were analyzed to ensure that ISTS characteristics would closely resemble natural speech patterns.
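
The ordering constraint can be sketched briefly as shown below. This is a simplified illustration only; the actual ISTS construction involved filtered audio segments and additional selection rules described in the paper.

```python
import random

# Languages used for the ISTS recordings (from the paper); the shuffle
# rule sketched here is a simplification of the published procedure.
LANGUAGES = ["American English", "Arabic", "Mandarin",
             "French", "German", "Spanish"]

def section_order(duration_s=10.0, segment_s=0.5, seed=1):
    """Pseudo-random 500 ms segment order for one 10 s or 15 s section,
    with each language appearing exactly once in every run of six."""
    rng = random.Random(seed)
    n_segments = int(duration_s / segment_s)   # e.g., 10 s -> 20 segments
    order = []
    while len(order) < n_segments:
        block = LANGUAGES[:]
        rng.shuffle(block)                     # one segment per language
        order.extend(block)
    return order[:n_segments]

print(section_order()[:6])   # the first six segments cover all six languages
```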

For analysis purposes, a 60-second ISTS stimulus was created by concatenation of 10- and 15-second sections.  This ISTS stimulus was measured and compared to natural speech and ICRA-5 stimuli based on several criteria:

  • Long-term average speech spectrum (LTASS)
  • Short term spectrum
  • Fundamental frequency
  • Proportion of voiceless segments
  • Band-specific modulation spectra
  • Comodulation characteristics
  • Pause and speech duration
  • Dynamic range (spectral power level distribution)

On all of the analysis criteria, the ISTS resembled natural speech as well as or better than the ICRA-5 stimulus. Notable improvements for the ISTS over the ICRA-5 stimulus were its comodulation characteristics and its dynamic range of 20-30dB, as well as pauses and combinations of voiced and voiceless segments that more closely resembled the distributions in natural speech. Overall, the ISTS was deemed an appropriate speech-like stimulus proposal for the new standard measurement protocol.

Following the detailed analysis, the ISTS stimulus was used to measure four different hearing instruments, which were programmed to fit a flat, sensorineural hearing loss of 60dBHL. Each instrument was nonlinear, with adaptive noise reduction, compression and feedback management. The first-fit algorithms from each manufacturer were used, with all microphones fixed in an omnidirectional mode. Instead of yielding gain and output measurements across frequency for a single input level, the results showed percentile-dependent gain (99th, 65th and 30th percentiles) across frequency, referenced to the long-term average speech spectrum. The percentile-dependent gain values provided information about nonlinearity: the softer components of speech were represented by the 30th percentile, and the moderate and loud components by the 65th and 99th percentiles, respectively. The relations among these three percentiles therefore represented the differences in gain for soft, moderate and loud sounds.
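
The percentile analysis itself is straightforward to illustrate: gain at each percentile is the difference between corresponding percentiles of the output and input short-term level distributions. The sketch below uses synthetic level distributions rather than ISTS measurements.

```python
import numpy as np

def percentile_gain(input_levels_db, output_levels_db,
                    percentiles=(30, 65, 99)):
    """Gain (dB) at chosen percentiles of the short-term level
    distributions, as in a percentile-dependent gain analysis."""
    return {p: float(np.percentile(output_levels_db, p)
                     - np.percentile(input_levels_db, p))
            for p in percentiles}

# Synthetic example: a 2:1 compressive aid gives the quiet frames
# (30th percentile) more gain than the loud frames (99th percentile).
rng = np.random.default_rng(0)
inp = rng.normal(65, 8, 1000)        # short-term input levels, dB
out = 90 + (inp - 65) * 0.5          # toy 2:1 static compression
print(percentile_gain(inp, out))
```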

The measurement technique described by Holube and colleagues, using the ISTS stimulus, offers significant advantages over current measurement protocols with standard sine wave or noise stimuli. First and perhaps most importantly, it allows hearing instruments to be programmed to real-life settings with adaptive signal processing features active. It measures how hearing aids process a stimulus that very closely resembles natural speech, so clinical verification measures may provide more meaningful information about everyday performance. By showing changes in percentile gain values across frequency, it also allows compression effects to be directly visible and may be used to evaluate noise reduction algorithms as well. The authors also note that the acoustic resemblance of ISTS to speech with its lack of linguistic information may have additional applications for diagnostic testing, telecommunications or communication acoustics.

The ISTS is currently available in some probe microphone equipment and will likely be introduced in most commercially available equipment over the next few years. Its introduction brings a standardized speech stimulus for the testing of hearing aids to the clinic. An important component of clinical best practice is the measurement of a hearing aid's response characteristics, most easily accomplished through in-situ probe microphone measurement in combination with a speech test stimulus such as the ISTS.

References

American National Standards Institute (ANSI) (2003). ANSI S3.22-2003. Specification of hearing aid characteristics. New York: Acoustical Society of America.

Byrne, D., Dillon, H., Tran, K., Arlinger, S. & Wilbraham, K. (1994). An international comparison of long-term average speech spectra. Journal of the Acoustical Society of America, 96(4), 2108-2120.

Cox, R.M., Matesich, J.S. & Moore, J.N. (1988). Distribution of short-term rms levels in conversational speech. Journal of the Acoustical Society of America, 84(3), 1100-1104.

Dreschler, W.A., Verschuure, H., Ludvigsen, C. & Westerman, S. (2001). ICRA noises: Artificial noise signals with speech-like spectral and temporal properties for hearing aid assessment. Audiology, 40, 148-157.

Fastl, H. (1987). Ein Störgeräusch für die Sprachaudiometrie [A masking noise for speech audiometry]. Audiologische Akustik, 26, 2-13.

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

International Electrotechnical Commission (1994). IEC 60118-0. Hearing aids: Measurement of electroacoustical characteristics. Geneva: Bureau of the International Electrotechnical Commission.

IPA (1999). Handbook of the International Phonetic Association. Cambridge: Cambridge University Press.

Plomp, R. (1984). Perception of speech as a modulated signal. In M.P.R. van den Broecke & A. Cohen (eds), Proceedings of the 10th International Congress of Phonetic Sciences, Utrecht. Dordrecht: Foris Publications, 29-40.


Will placing a receiver in the canal increase occlusion?

The influence of receiver size on magnitude of acoustic and perceived measures of occlusion.

Vasil-Dilaj, K.A., & Cienkowski, K.M. (2010). The influence of receiver size on magnitude of acoustic and perceived measures of occlusion. American Journal of Audiology 20, 61-68.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

The occlusion effect, an increase in bone-conducted sound when the ear canal is occluded, is a consideration for many hearing aid fittings. The hearing aid shell or earmold restricts the escape of low-frequency sound from the ear canal (Revit, 1992), resulting in an increase in low-frequency sound pressure level at the eardrum, sometimes by as much as 25dB (Goldstein & Hayes, 1965; Mueller & Bright, 1996; Westermann, 1987). Hearing aid users suffering from occlusion will complain of an “echo” or “hollow” quality to their voices, and hearing their own chewing can be particularly annoying. Indeed, perceived occlusion is reported to be a common reason for dissatisfaction with hearing aids (Kochkin, 2000).

Occlusion from a hearing aid shell or earmold is usually managed by increasing the vent diameter or decreasing the length of the vent in order to decrease the acoustic mass of the vent (Dillon, 2001; Kiessling et al., 2005). One drawback of a larger vent diameter is an increased risk of feedback, but this problem has been alleviated by improvements in feedback cancellation. Better feedback management has also resulted in more widespread use of open-fit, receiver-in-canal (RIC) instruments, which have proven effective in reducing measured and perceived occlusion (Dillon, 2001; Kiessling et al., 2005; Kiessling et al., 2003; Vasil & Cienkowski, 2006).

Though open-fit BTE hearing instruments are designed to be acoustically transparent, some open fittings still result in perceived occlusion. Interestingly, perceived occlusion is not always strongly or even significantly correlated with measured acoustic occlusion (Kiessling et al., 2005; Kuk et al., 2005; Kampe & Wynne, 1996), so other factors apparently contribute to the perception of occlusion. The size of the receiver and/or eartip, as well as the size of the ear canal, affects air flow in and out of the ear canal, and it seems likely that these factors could affect the amount of acoustic and perceived occlusion.

To examine these factors, Vasil-Dilaj and Cienkowski recruited thirty adults, 17 men and 13 women. All had normal hearing, unremarkable otoscopic examinations and normal tympanograms. Two measures of ear canal volume were obtained: volume estimates from the tympanometry screener and estimates determined from earmold impressions that were sent to a local hearing aid manufacturer. Participants were fitted binaurally with RIC hearing instruments. Instead of the domes used clinically with RIC instruments, flexible receiver sleeves designed specifically for research purposes were used. The special receiver sleeves allowed the researchers to increase the overall circumference of the receiver systematically, so that six receiver size conditions could be evaluated: no receiver, receiver only (circumference 0.149 in.), 0.170 in., 0.190 in., 0.210 in. and 0.230 in.

Real-ear unoccluded and occluded measures were obtained while subjects vocalized the vowel /i/, monitoring the level of their vocalizations via a sound level meter. The real-ear occlusion effect (REOE) was determined by subtracting the unoccluded response from the occluded response (REOE = REOR - REUR). Subjective measures were obtained by asking subjects to rate their perception of occlusion on a five-point scale ranging from “no occlusion” to “complete occlusion”. To avoid bias in the occlusion ratings, participants were not allowed to view the hearing aids or receiver sleeves until after testing was completed.
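
As a brief worked example of the REOE arithmetic (with invented values, not the study's measurements):

```python
import numpy as np

# Invented example values in dB SPL, not the study's data
freqs_hz = [250, 500, 1000, 2000]
reur_db  = np.array([72.0, 70.0, 66.0, 60.0])   # unoccluded response
reor_db  = np.array([73.5, 71.8, 67.0, 63.0])   # occluded response
reoe_db  = reor_db - reur_db                    # REOE = REOR - REUR
print(dict(zip(freqs_hz, reoe_db.round(1))))    # small values below 500 Hz
```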

Results indicated that measured acoustic occlusion was very low for all conditions, especially below 500Hz, where it was below 2dB for most of the receiver conditions. For frequencies above 500Hz, REOE increased as receiver size increased. The no receiver and receiver only conditions had the least amount of measured occlusion and the largest receiver sizes had the most. There was no significant interaction between receiver size and frequency.

Perceived occlusion also increased as receiver size increased and though it was mild for most participants in most of the conditions, for the largest receiver condition, some participants rated occlusion as severe. Perceived occlusion was not significantly correlated with measured acoustic occlusion for low frequencies, and the two measures were only weakly correlated for frequencies between 700-1500Hz.

There was no significant relationship between either measure of ear canal volume and perceived or acoustic measures of occlusion. However, adequate ear canal volume to accommodate all receiver sizes was an inclusion criterion for the study, so the authors suggest that smaller ear canal volume could still be a factor in perceived or acoustic occlusion and may warrant further study.

The results of the current investigation show that occlusion was minimal for most of the receiver sizes. These findings are in agreement with previous studies of vented hollow molds, completely open IROS shells (Vasil & Cienkowski, 2006), large 2.4mm vents and silicone ear tips (Kiessling et al, 2005). REOEs for the two largest receivers matched results for a hollow mold with 1mm vent (Kuk et al, 2009) and the REOEs for the two smallest receivers matched results for hollow molds with 2mm and 3mm vents (Kuk et al, 2009).  The authors also point out that there was minimal insertion loss for all conditions. Insertion loss from closed earmolds can amount to 20dBHL (Sweetow, 1991) and can contribute to a perception of occlusion or poor voice quality.  The relative lack of insertion loss is yet another potential advantage of open and RIC fittings.

Perception of occlusion did increase with the size of the receiver, but overall differences were small. This is in agreement with prior research suggesting that reduction of air flow out of the ear canal results in more low-frequency energy in the ear canal (Revit, 1992), which can cause an increase in occlusion (Dillon, 2001). The authors point out that although subjects were not able to see the receivers prior to insertion, they were probably aware of the size and weight differences and could have been influenced by the perception of a larger object in the ear as opposed to actual occlusion. This may also be the case for hearing aid users, perhaps particularly so for individuals with smaller or tortuous ear canals.

The occlusion effect can be challenging, especially when anatomical or other constraints result in the use of minimal venting for individuals with good low-frequency hearing. The results reported here suggest that acoustic occlusion with RIC instruments is slight and may not always be related to perceived occlusion. Therefore, a client’s perception of “hollow” voice quality, “echoey” sound quality or a plugged sensation may be the most reliable indication of occlusion and the most important determinant of eartip size or venting characteristics. The administration of an occlusion rating scale or other self-evaluation techniques may also prove helpful in evaluating occlusion and its impact on overall hearing aid satisfaction.

References

Dillon, H. (2001). Hearing aids. New York, NY: Thieme.

Goldstein, D.P.,  & Hayes, C.S. (1965). The occlusion effect in bone conduction hearing.  Journal of Speech and Hearing Research 8, 137-148.

Kampe, S.D., & Wynne, M.K. (1996). The influence of venting on the occlusion effect. The Hearing Journal 49(4), 59-66.

Kiessling, J., Brenner, B., Jespersen, C.T., Groth, J., & Jensen, O.D. (2005). Occlusion effect of earmolds with different venting systems. Journal of the American Academy of Audiology, 16, 237-249.

Kiessling, J., Margolf-Hackl, S., Geller, S., & Olsen, S.O. (2003). Researchers report on a field test of a non-occluding hearing instrument. The Hearing Journal, 56(9), 36-41.

Kochkin, S. (2000). MarkeTrak V: Why my hearing aids are in the drawer: The consumer’s perspective. The Hearing Journal 53 (2), 34-42.

Kuk, F.K., Keenan, D., & Lau, C.C. (2005). Vent configurations on subjective and objective occlusion effect. Journal of the American Academy of Audiology 16, 747-762.

Mueller, H.G., & Bright, K.E. (1996). The occlusion effect during probe microphone measurements. Seminars in Hearing 17 (1), 21-32.

Revit, L. (1992). Two techniques for dealing with the occlusion effect. Hearing Instruments 43 (12), 16-18.

Sweetow, R. W. (1991). The truth behind “non-occluding” earmolds. Hearing Instruments 42 (1), 25.

Vasil, K.A., & Cienkowski, K.M. (2006). Subjective and objective measures of the occlusion effect for open-fit hearing aids. Journal of the Academy of Rehabilitative Audiology 39, 69-82.

Vasil-Dilaj, K.A., & Cienkowski, K.M. (2010). The influence of receiver size on magnitude of acoustic and perceived measures of occlusion. American Journal of Audiology 20, 61-68.

Westermann, V.H. (1987). The occlusion effect. Hearing Instruments, 38 (6), 43.

Does Expansion Decrease Low Level Speech Understanding?

Effects of Expansion on Consonant Recognition and Consonant Audibility

Brennan, M., & Souza, P. (2009). Effects of expansion on consonant recognition and consonant audibility. Journal of the American Academy of Audiology 20, 119-127.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors. 

The primary goal of a hearing aid fitting is to improve audibility and availability of speech sounds while maintaining comfort and loudness tolerance. A linear hearing aid fitting may provide audibility for average speech sounds but may result in discomfort for loud sounds and inaudibility for soft sounds. The use of wide dynamic range compression (WDRC) has addressed these issues, helping maximize the useful dynamic range of hearing for individuals who require amplification for quiet and moderate sounds yet have limited tolerance for loud sounds.

One potential issue with WDRC has been the increased audibility of very soft environmental sounds, which may be unwelcome for individuals who have adjusted to long-term hearing loss and are not used to perceiving these sounds. An additional problem for individuals with good residual hearing at some frequencies is that they will hear circuit noise from the hearing aid itself. Both of these issues can be unpleasant for the listener, possibly resulting in rejection or limited use of the hearing aids.

Expansion makes hearing aids quieter at low input levels. This is done in almost all modern hearing aids in order to reduce annoying environmental or circuit noise.  There is concern, however, that if too aggressive, expansion can have a detrimental effect on speech intelligibility (Plyler et al., 2005). It has been proposed that reduced speech recognition ability with expansion is due to reduced audibility of speech cues (Walker et al, 1984; Plyler et al, 2007).

Previous examinations of expansion have measured its effect on the audibility of room noise (Plyler et al., 2005) or of the long-term average speech spectrum (LTASS; Zakis & Wise, 2007), but did not directly measure the effect of expansion on the audibility of the speech signal itself. The current authors sought more specific insight into the effect of expansion on speech recognition by studying the relationship between expansion and consonant audibility.

Though there may be other related parameters warranting examination, the primary variables of interest relating to expansion are the ratio and the kneepoint.  In theory, a high expansion kneepoint should have a negative effect on speech recognition, because gain for stimuli below the kneepoint is reduced, resulting in decreased audibility.  Speech presented above the expansion kneepoint should be less affected by the expansion.

Therefore, the hypotheses for Brennan and Souza’s study were as follows:

1. A high expansion kneepoint will significantly reduce consonant-vowel (CV) recognition.

2. A high expansion kneepoint will significantly reduce CV audibility.

3. The effect of expansion on speech recognition and audibility will be reduced for increased speech input levels.

4. There will be a significant positive correlation between CV recognition and audibility for each condition.

Thirteen hearing-impaired individuals participated in the experiment. Nine were experienced hearing aid users; the remaining four did not use hearing aids. Subjects were fitted monaurally with a multi-channel, digital, behind-the-ear hearing aid.  Venting was 3mm for most subjects, but was reduced to 1mm for two subjects and plugged for one subject. 

The hearing aids were set to DSL 4.1 targets and had three separate programs:

1. Multichannel WDRC with an expansion kneepoint of 50dB SPL (high kneepoint condition)

2. Multichannel WDRC with an expansion kneepoint of 30dB SPL (low kneepoint condition)

3. Linear amplification with output compression limiting (control condition)

Expansion ratio was constant at 0.7:1, which represents a typical expansion ratio currently available in hearing aids.

Eight CV nonsense syllables, four voiced and four unvoiced, from the Nonsense Syllable Test (Dubno & Dirks, 1982) were presented to subjects at 50, 60 and 71dB SPL. Recordings of the aided stimuli were measured at the tympanic membrane for each subject (Souza & Tremblay, 2006), and signal audibility was determined using the Aided Audibility Index (AAI; Stelmachowicz et al., 1994), with modifications for hearing-impaired subjects as described by Souza and Turner (1998).

Three of Brennan and Souza's hypotheses were confirmed: high expansion kneepoints significantly reduced signal audibility at all presentation levels, and consonant-vowel recognition scores were significantly lower for the high kneepoint condition, especially at presentation levels of 50 and 60dB SPL. Subsequent regression analyses revealed that CV syllable recognition scores were significantly associated with audibility. The authors' presumption that the effect of expansion on audibility and speech recognition would decrease with increasing speech presentation levels was not confirmed. Instead, expansion had negative effects on CV recognition and audibility at all presentation levels. This was in contrast with previous work reporting that expansion did not affect speech recognition above certain levels (Walker et al., 1984; Bray & Ghent, 2001; Plyler et al., 2005a, 2007), but the discrepancy may be explained by differences in presentation level, speech materials, expansion ratio, time constants or other hearing aid settings.

Despite some variability in results across studies, it is clear that high expansion kneepoints result in decreased speech recognition scores, presumably due in part to decreased audibility. Other potential explanations involve degradation of temporal cues and disruption of the intensity relationships between consonants and vowels, which provide important cues for consonant recognition (Walker et al, 1984; Hedrick and Younger, 2007).

Expansion is a feature of modern hearing aids that is often misunderstood; because it is characterized in terms of a ratio and a kneepoint, it may be easily confused with compression. Essentially the opposite of compression, expansion provides less amplification for softer sounds than for louder sounds. The expansion kneepoint is often the same as the compression kneepoint, meaning that expansion occurs below the kneepoint level and compression occurs above it. Alternatively, the input/output function of a circuit might show a region of linearity between separate expansion and compression kneepoints, as in the sketch below.
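
These relationships can be illustrated with a static input/output function. The sketch below uses hypothetical parameter values and interprets the 0.7:1 expansion ratio under one common convention, in which a 0.7dB change in input produces a 1dB change in output below the expansion kneepoint.

```python
def output_level(input_db, exp_kp=30.0, comp_kp=50.0, gain=20.0,
                 exp_ratio=0.7, comp_ratio=2.0):
    """Toy static input/output function (all values in dB SPL).

    Hypothetical behavior: linear gain between the expansion kneepoint
    (exp_kp) and the compression kneepoint (comp_kp); 2:1 compression
    above comp_kp; 0.7:1 expansion below exp_kp, where output falls
    1 dB for every 0.7 dB drop in input.
    """
    if input_db >= comp_kp:                  # compression region
        return comp_kp + gain + (input_db - comp_kp) / comp_ratio
    if input_db >= exp_kp:                   # linear region
        return input_db + gain
    return exp_kp + gain + (input_db - exp_kp) / exp_ratio  # expansion

# Gain shrinks below the 30 dB SPL expansion kneepoint: a 20 dB SPL
# input receives only ~15.7 dB of gain instead of the linear 20 dB.
for level in (20, 30, 40, 50, 60, 70):
    print(level, "->", round(output_level(level), 1))
```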

Regardless of its various characteristics, expansion may help reduce the perception of unwanted environmental and hearing aid circuit noise, resulting in improved subjective hearing aid performance.  However, because audibility and speech recognition are two primary goals of amplification, it is essential to ensure that expansion does not result in decreased objective performance.  Brennan and Souza suggest that the use of active noise reduction for lower level stimuli may provide the benefits of expansion without the negative effect on speech recognition, but more research on this topic is warranted. Because expansion is commonly used in current hearing instruments, it is important for audiologists to understand the principles of compression and expansion so that appropriate settings can be selected to maximize audibility and comfort for individual hearing aid users.  And as always, counseling is essential for preparing both new and experienced hearing aid users for adjustment to new hearing aid technology and the perception of normal environmental sounds.

References

Brennan, M. & Souza, P. (2009). Effects of expansion on consonant recognition and consonant audibility. Journal of the American Academy of Audiology 20: 119-127.

Dubno, J.R. & Dirks, D.D. (1982). Evaluation of hearing-impaired listeners using a Nonsense-Syllable Test: I. Test reliability. Journal of Speech, Language and Hearing Research 25: 135-141.

Hedrick, M.S. & Younger, M.S. (2007). Perceptual weighting of stop consonant cues by normal and impaired listeners in reverberation versus noise. Journal of Speech, Language and Hearing Research 50: 254-269.

Plyler, P., Hill, A., & Trine, T. D. (2005a). The effects of expansion on the objective and subjective performance of hearing instrument users. Journal of the American Academy of Audiology, 16, 101-113.

Plyler, P.N., Lowery, K.J., Hamby, H.M. & Trine, T.D. (2007). The objective and subjective evaluation of multichannel expansion in wide dynamic range compression hearing instruments. Journal of Speech, Language and Hearing Research 50: 15-24.

Souza, P.E. & Tremblay, K.L. (2006). New perspectives on assessing amplification effects. Trends in Amplification 10: 119-143.

Souza, P.E. & Turner, C.W. (1998). Multichannel compression, temporal cues and audibility. Journal of Speech, Language and Hearing Research 41: 315-326.

Stelmachowicz, P.G., Lewis, D.E., Kalberer, L. & Creutz, T. (1994). Situational Hearing Aid Response Profile User’s Manual (SHARP, v 6.0). Omaha: Boys Town National Research Hospital.

Walker, G., Byrne, D., & Dillon, H. (1984). The effects of multichannel compression/expansion amplification on the intelligibility of nonsense syllables in noise. Journal of the Acoustical Society of America 76: 746-757.

Zakis, J.A. & Wise, C. (2007). The acoustic and perceptual effects of two noise suppression algorithms. Journal of the Acoustical Society of America 121: 433-441.