Starkey Research & Clinical Blog

Hearing Aids Alone can be Adjusted to Help with Tinnitus Relief

Shekhawat, G.S., Searchfield, G.D., Kobayashi, K. & Stinear, C. (2013). Prescription of hearing aid output for tinnitus relief. International Journal of Audiology 2013, early online: 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The American Tinnitus Association (ATA) reports that approximately 50 million people in the United States experience some degree of tinnitus. About one third of tinnitus sufferers consider it severe enough to seek medical attention. Fortunately, only a small proportion of tinnitus sufferers experience symptoms debilitating enough that they feel they cannot function normally. But even when it is not debilitating, tinnitus still causes a number of disruptive effects for many, including sleep interference, difficulty concentrating, anxiety, frustration and depression (Tyler & Baker, 1983; Stouffer & Tyler, 1990; Axelsson, 1992; Meikle, 1992; Dobie, 2004).

Therapeutic treatments for tinnitus include the use of tinnitus maskers, tinnitus retraining therapy, biofeedback and counseling. Though these methods provide relief for many, the tendency for tinnitus to co-occur with sensorineural hearing loss (Hoffman & Reed, 2004) leads the majority of individuals to attempt to manage their tinnitus with hearing aids alone (Henry et al., 2005; Kochkin & Tyler, 2008; Shekhawat et al., 2013). Hearing aids may offer a number of benefits for individuals with tinnitus: audiological counseling during the fitting process may provide the individual with a better understanding of hearing loss and tinnitus (Searchfield et al., 2010); hearing aids may reduce the stress related to struggling to hear and understand; and amplification of environmental sound may reduce the perceived loudness of tinnitus (Tyler, 2008).

Prescriptive hearing aid fitting procedures are designed to improve audibility and compensate for hearing loss rather than address tinnitus concerns. Yet the majority of studies show that hearing aids alone can be useful for tinnitus management (Shekhawat et al., 2013). The Better Hearing Institute reports that approximately 28% of hearing aid users achieve moderate to substantial tinnitus relief with hearing aid use (Tyler, 2008). Approximately 66% of these individuals said their hearing aids offered tinnitus relief most or all of the time and 29% reported that their hearing aids relieved their tinnitus all of the time. However, little is known about how hearing aids should be adjusted to optimize this apparent relief from tinnitus. In a study comparing DSL I/O v4.0 and NAL-NL1, Wise (2003) found that low compression kneepoints in the DSL formula reduced tinnitus awareness for 80% of subjects, but these settings also made environmental sounds more annoying. Conversely, subjects had higher word recognition scores with NAL-NL1 but did not receive equal tinnitus reduction. The proposed explanation was the increased low-intensity, low-frequency gain of the DSL I/O formula versus the high frequency emphasis of NAL-NL1. Based on these findings, the author suggested the use of separate programs for regular use and for tinnitus relief.

Shekhawat and his colleagues began to address the issue of prescriptive hearing aid fitting for tinnitus by studying how output characteristics should be tailored to meet the needs of hearing aid users with tinnitus.  Specifically, they examined how modifying the high frequency characteristics of the DSL v5 (Scollie et al., 2005) prescription would affect subjects’ short term tinnitus perception.  Speech files with variable high frequency cut-offs and gain settings were designed and presented to subjects in matched pairs to arrive at the most favorable configuration for tinnitus relief.

Twenty-five participants with mild to moderate high-frequency sensorineural hearing loss were recruited. None of the participants had used hearing aids before, but all indicated interest in trying hearing aids to alleviate their tinnitus. All subjects had experienced chronic, bothersome tinnitus for at least two years, and the average rating of tinnitus loudness was 62.6 on a scale from 1-100, where 1 is very faint and 100 is very loud. Subjects had a mean Tinnitus Functional Index (TFI; Meikle et al., 2012) score of 39.30. Six participants reported unilateral tinnitus localized to the left side, 15 had bilateral tinnitus and 4 reported tinnitus localized to the center of the head, which is likely to be present bilaterally though not necessarily symmetrically. Tonal was the most commonly reported tinnitus quality (40% of subjects), whereas 28% described it as noise, 20% as crickets and 12% as a combination of sound qualities. Tinnitus pitch matching was conducted using pairs of tones in which subjects were repeatedly asked to indicate which of the tones more closely matched the pitch of their tinnitus. The average matched tinnitus pitch was 7.892kHz, with a range from 800Hz to 14.5kHz. When asked to describe the pitch of their tinnitus, most subjects defined it as “very high pitched”, some said “high pitched” and some said “medium pitched”.

There were 13 speech files, based on sentences spoken by a female talker, with variable high frequency characteristics: three cut-off frequencies (2, 4 and 6kHz) and four high frequency gain settings (+6, +3, -3 and -6dB). Stimuli were presented via a master hearing aid with settings programmed to match DSL v5 prescriptive targets for each subject’s hearing loss. Pairs of sentences were presented in a round-robin tournament procedure and subjects were asked to choose which one interfered more with their tinnitus and made it less audible. A computer program tabulated the number of “wins” for each sentence and collapsed the information across subjects to determine a “winner”: the sentence that was most effective at reducing tinnitus audibility. Real-ear measures were used to compare DSL v5 prescribed settings with the characteristics of the winning sentence, and outputs were recorded from 250Hz to 6000Hz.
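
The round-robin tally itself is simple to sketch. The Python below is a minimal illustration, not the authors' software: the stimulus labels are hypothetical, and treating the 13th file as an unmodified condition is an assumption (the paper's file naming is not given here).

```python
from itertools import combinations

# Hypothetical labels: 3 cut-off frequencies x 4 high-frequency gain settings,
# plus an assumed "unmodified" 13th file (an illustrative assumption).
stimuli = [f"{cutoff}kHz_{gain:+d}dB"
           for cutoff in (2, 4, 6)
           for gain in (6, 3, -3, -6)] + ["unmodified"]

def run_round_robin(stimuli, choose):
    """Present every pair of stimuli once. `choose(a, b)` returns the member
    of the pair judged to interfere more with the tinnitus. Returns the win
    count for each stimulus."""
    wins = {s: 0 for s in stimuli}
    for a, b in combinations(stimuli, 2):
        wins[choose(a, b)] += 1
    return wins

# Stand-in listener who always prefers the first option in each pair:
wins = run_round_robin(stimuli, lambda a, b: a)
winner = max(wins, key=wins.get)  # the stimulus with the most "wins"
```

In the study, win counts were additionally collapsed across subjects before declaring an overall winner; the per-listener tally above is the core of that procedure.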

The most preferred output for interfering with tinnitus perception was a 6dB reduction at 2kHz, which was chosen by 26.47% of the participants.  A 6dB reduction at 4kHz was preferred by 14.74% of the subjects, followed by a 3dB reduction at 2kHz, which was preferred by 11.76%.  There were no significant differences between the preferences for any of these settings.

They found that when tinnitus pitch was lower than 4kHz, the preferred setting had lower output than DSL v5 across the frequency range. The difference was small (1-3dB) and became smaller as tinnitus pitch increased. When tinnitus pitch was between 4-8kHz, subjects preferred slightly less output than DSL v5 for high frequencies and slightly more output for low frequencies, though these differences were minimal as well. When tinnitus pitch was higher than 8kHz, participants preferred output that was slightly greater than DSL v5 at three frequencies: 750Hz, 1kHz and 6kHz. From these results a trend emerged: as tinnitus pitch increased, preferred output moved closer to, and eventually slightly above, DSL v5 targets, though the differences were not statistically significant.

Few studies investigating the use of hearing aids for tinnitus management have considered the perceived pitch of the tinnitus or the prescriptive method of the hearing aids (Shekhawat et al., 2013). The results of this study suggest that DSL v5 could be an effective prescriptive formula for hearing aids used in a tinnitus treatment plan, though the pitch of the individual’s tinnitus might affect the optimal output settings. In general, they found that the higher the tinnitus pitch, the more closely the preferred output matched DSL v5 targets. This study agrees with an earlier report by Wise (2003) in which subjects preferred DSL I/O over NAL-NL1 for interfering with and reducing tinnitus. It is unknown how NAL-NL2 targets would fare in a similar comparison, though the NAL-NL2 formula may provide more tinnitus relief than its predecessor because it tends to prescribe slightly higher gain for low frequencies and lower compression ratios, which could potentially provide more of a masking effect from environmental sounds. The NAL-NL2 formula should be studied as it pertains to tinnitus management, perhaps along with consideration of other factors including degree of loss, gender and prior experience with hearing aids, since these affect the targets prescribed by the updated formula (Keidser & Dillon, 2006; Keidser et al., 2008). The subjects in the present study had similar degrees of loss and all lacked prior experience with amplification; the NAL-NL2 formula takes both of these factors into account, prescribing slightly different gain based on degree of loss and on prior hearing aid experience.

The authors recommend offering separate hearing aid programs for use when the listener desires tinnitus relief. Most fitting formulae are designed to optimize speech intelligibility and audibility, and based on previous reports, an individual might prefer one formula when speech understanding and communication is their top priority, and may prefer another, used with or without an added noise masker, when their tinnitus is bothering them.

They also propose that tinnitus pitch matching should be considered when programming hearing aids, though there is often considerable variability in results and testing needs to be repeated several times to increase reliability. Still, their study agrees with prior work suggesting that the pitch of the tinnitus affects how likely hearing aids are to reduce it and whether output adjustments can make the hearing aids more effective to this end. Schaette et al. (2010) found that individuals with tinnitus pitch lower than 6kHz showed more reduction of tinnitus with hearing aid use than did subjects whose pitch was higher than 6kHz. This makes sense given the typical bandwidth of hearing aids, in which most gain is delivered below this frequency range. Not surprisingly, another study reported that hearing aids were most effective at reducing tinnitus when the pitch of the tinnitus was within the frequency response range of the hearing aids (McNeil et al., 2012). Though incorporating tinnitus pitch matching into a clinical protocol might seem daunting or time consuming, it is probably possible to use an informal bracketing procedure, similar to one used for MCLs, to get an idea of the individual’s tinnitus pitch range. Testing can be repeated at subsequent visits to eventually arrive at a more reliable estimate. If pitch matching measures are not possible, clinicians can question the patient about their perceived tinnitus pitch range and, with reference to the current study, adjust outputs in the 2kHz to 4kHz range to determine if the individual experiences improvement in tinnitus relief.
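
As a rough illustration of how such a bracketing procedure could converge on a pitch estimate, the sketch below narrows a frequency bracket geometrically based on repeated two-tone judgments. This is a hypothetical protocol for illustration only, not one described by Shekhawat et al.; the starting bounds and quarter-octave tolerance are arbitrary assumptions.

```python
import math

def bracket_pitch_match(closer_to_tinnitus, lo=500.0, hi=16000.0, tol_octaves=0.25):
    """Narrow a [lo, hi] frequency bracket (Hz) around the tinnitus pitch.
    `closer_to_tinnitus(f_low, f_high)` presents the two bracket tones and
    returns whichever one the patient judges closer to their tinnitus."""
    while math.log2(hi / lo) > tol_octaves:
        mid = math.sqrt(lo * hi)       # geometric (log-scale) midpoint
        if closer_to_tinnitus(lo, hi) == hi:
            lo = mid                   # pitch lies in the upper half
        else:
            hi = mid                   # pitch lies in the lower half
    return math.sqrt(lo * hi)          # final estimate in Hz

# Simulated patient whose tinnitus pitch is 8kHz (illustrative only):
simulated = lambda f1, f2: min((f1, f2), key=lambda f: abs(math.log2(f / 8000)))
estimate = bracket_pitch_match(simulated)
```

Because real pitch-match responses are variable, repeating the procedure across visits and averaging the estimates, as the text suggests, would be more reliable than any single run.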

A series of considerations is proposed for fitting hearing instruments on tinnitus sufferers and for employing dedicated tinnitus programs:

- noise reduction should be disabled;

- fixed activation of omnidirectional microphones introduces more environmental noise;

- in contrast to the previous recommendation, full-time activation of directional microphones will increase the hearing aid’s internal noise floor;

- lower compression knee points increase amplification for softer sounds;

- expansion should be turned off to increase amplification of low-level background sound;

- efforts should be made to minimize occlusion, which can emphasize the perception of tinnitus;

- ensuring physical comfort of the devices can minimize the user’s general awareness of their ears and the hearing aids, potentially reducing their attention to the tinnitus as well (Sheldrake & Jastreboff, 2004; Searchfield, 2006);

- user controls are important as they allow access to alternate hearing aid programs and sound therapy options.

Dr. Shekhawat and his colleagues also underscore the importance of counseling tinnitus sufferers who choose hearing aids. Clinicians need to ensure that these patients have realistic expectations about the potential benefits of hearing aids and that they know the devices will not cure their tinnitus. Follow-up care is especially important to determine if adjustments or further training is necessary to improve the performance of the aids for all of their intended purposes.

Currently, little is known about how to optimize hearing aid settings for tinnitus relief and there are no prescriptive recommendations targeted specifically for tinnitus sufferers. Shekhawat and his colleagues propose that the DSL v5 formula may be an appropriate starting point for these individuals, in their basic program and/or in an alternate program designated for use when their tinnitus is particularly bothersome. Most important, however, is the observation that intentional manipulation of parameters common to most hearing aid fittings may increase the likelihood of tinnitus relief with hearing aid use. Further investigation into the optimization of these fitting parameters may reveal a prescriptive combination that audiologists can leverage to benefit individuals with hearing loss who also seek relief from the stress and annoyance of tinnitus.

 

References

American Tinnitus Association (ATA) reporting data from the 1999-2004 National Health and Nutrition Examination Survey (NHANES), conducted by the Centers for Disease Control and Prevention (CDC). www.ata.org, retrieved 9-10-13.

Axelsson, A. (1992). Conclusion to Panel Discussion on Evaluation of Tinnitus Treatments. In J.M. Aran & R. Dauman (Eds) Tinnitus 91. Proceedings of the Fourth International Tinnitus Seminar (pp. 453-455). New York, NY: Kugler Publications.

Cornelisse, L.E., Seewald, R.C. & Jamieson, D.G. (1995). The input/output formula: A theoretical approach to the fitting of personal amplification devices. Journal of the Acoustical Society of America 97, 1854-1864.

Dobie, R.A. (2004). Overview: Suffering From Tinnitus. In J.B. Snow (Ed) Tinnitus: Theory and Management (pp.1-7). Lewiston, NY: BC Decker Inc.

Henry, J.A., Dennis, K.C. & Schechter, M.A. (2005). General review of tinnitus: Prevalence, mechanisms, effects and management. Journal of Speech, Language and Hearing Research 48, 1204-1235.

Hoffman, H.J. & Reed, G.W. (2004). Epidemiology of tinnitus. In: J.B. Snow (ed.) Tinnitus: Theory and Management. Hamilton, Ontario: BC Decker.

Keidser, G. & Dillon, H. (2006). What’s new in prescriptive fittings down under? In: Palmer, C.V., Seewald, R. (Eds.), Hearing Care for Adults 2006. Phonak AG, Stafa, Switzerland, pp. 133-142.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing aid users. International Journal of Audiology 47(10), 621-635.

Kochkin, S. & Tyler, R. (2008). Tinnitus treatment and effectiveness of hearing aids: Hearing care professional perceptions. Hearing Review 15, 14-18.

McNeil, C., Tavora-Vieira, D., Alnafjan, F., Searchfield, G.D. & Welch, D. (2012). Tinnitus pitch, masking and the effectiveness of hearing aids for tinnitus therapy. International Journal of Audiology 51, 914-919.

Meikle, M.B. (1992). Methods for Evaluation of Tinnitus Relief Procedures. In J.M. Aran & R. Dauman (Eds.) Tinnitus 91: Proceedings of the Fourth International Tinnitus Seminar (pp. 555-562). New York, NY: Kugler Publications.

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

Moffat, G., Adjout, K., Gallego, S., Thai-Van, H. & Collet, L. (2009). Effects of hearing aid fitting on the perceptual characteristics of tinnitus. Hearing Research 254, 82-91.

Schaette, R., Konig, O., Hornig, D., Gross, M. & Kempter, R. (2010). Acoustic stimulation treatments against tinnitus could be most effective when tinnitus pitch is within the stimulated frequency range. Hearing Research 269, 95-101.

Shekhawat, G.S., Searchfield, G.D., Kobayashi, K. & Stinear, C. (2013). Prescription of hearing aid output for tinnitus relief. International Journal of Audiology 2013, early online: 1-9.

Shekhawat, G.S., Searchfield, G.D. & Stinear, C.M. In press (2013). Role of hearing aids in tinnitus intervention: A scoping review. Journal of the American Academy of Audiology.

Searchfield, G.D. (2006). Hearing aids and tinnitus. In: R.S. Tyler (ed). Tinnitus Treatment, Clinical Protocols. New York: Thieme Medical Publishers, pp. 161-175.

Searchfield, G.D., Kaur, M. & Martin, W.H. (2010). Hearing aids as an adjunct to counseling: Tinnitus patients who choose amplification do better than those that don’t. International Journal of Audiology 49, 574-579.

Sheldrake, J.B. & Jastreboff, M.M. (2004). Role of hearing aids in management of tinnitus. In: J.B. Sheldrake, Jr. (ed.) Tinnitus: Theory and Management. London: BC Decker Inc, pp. 310-313.

Stouffer, J.L. & Tyler, R. (1990). Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 55, 439-453.

Tyler, R.S. (Ed.) (2008). The Consumer Handbook on Tinnitus. Sedona, AZ: Auricle Ink Publishers.

Tyler, R. & Baker, L.J. (1983). Difficulties experienced by tinnitus sufferers. Journal of Speech and Hearing Disorders 48, 150-154.

Wise, K. (2003). Amplification of sound for tinnitus management: A comparison of DSL i/o and NAL-NL1 prescriptive procedures and the influence of compression threshold on tinnitus audibility. Section of Audiology, Auckland: University of Auckland.

 

Hearing Aid Behavior in the Real World

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Hearing aid signal processing offers proven advantages for many everyday listening situations. Directional microphones improve speech recognition in the presence of competing sounds, and noise reduction decreases annoyance of surrounding noise while possibly improving ease of listening (Sarampalis et al., 2009). Expansion reduces the annoyance of low-level environmental noise as well as circuit noise from the hearing aid. Modern hearing aids typically offer automatic activation of signal processing features based on information derived through acoustic analysis of the environment. Some signal processing features can also be assigned to independent, manually accessible hearing aid memories. The opportunity to manually activate a hearing aid feature allows patients to make conscious decisions about the acoustic conditions of the environment and access an appropriately optimized memory configuration (Keidser, 1996; Surr et al., 2002).

However, many hearing aid users who need directionality and noise reduction may be unable to manually adjust their hearing aids, due to physical limitations or an inability to determine the optimal setting for a situation. Other users may be reluctant to make manual adjustments for fear of drawing attention to the hearing aids and therefore the hearing impairment. Cord et al. (2002) reported that as many as 23% of users with manual controls do not use their additional programs and leave the aids in a default mode at all times. Most hearing aids now offer automatic directionality and noise reduction, taking the responsibility for situational adjustments away from the user. This allows more hearing aid users to experience advanced signal processing benefits and reduces the need for manual adjustments.

The decision to provide automatic activation of expansion, directionality, and noise reduction is based on their known benefits for particular acoustic conditions, but it is not well understood how these features interact with each other or with changing listening environments in everyday use. This poses a challenge to clinicians when it comes to follow-up fine-tuning, because it is impossible to determine what features were activated at any particular moment. Datalogging offers an opportunity to better interpret a patient’s experience outside of the clinic or laboratory. Datalogging reports often include average daily or total hours of use as well as the proportion of time an individual has spent in quiet or noisy environments, but these are general summaries and do not reveal which signal processing features were active in which acoustic environments. For example, a clinician may be able to determine that an aid was in a directional mode 20% of the time and that the user spent 26% of their time listening to speech in the presence of noise, but the log does not indicate whether directional processing was active during those exposures to speech in noise. Therefore, the clinician must rely on user reports and observations to determine the appropriate adjustments, which may not reliably represent the array of listening experiences and acoustic environments that were encountered (Wagener et al., 2008).

In the study discussed here, Banerjee investigated the implementation of automatic expansion, directionality and noise management features. She measured environmental sound levels to determine the proportion of time individuals spent in quiet and noisy environments, as well as how these input levels related to activation of automatic features. She also examined bilateral agreement across a pair of independently functioning hearing aids to determine the proportion of time that the aids demonstrated similar processing strategies.

Ten subjects with symmetrical, sensorineural hearing loss were fitted with bilateral, behind-the-ear hearing aids. Age ranged from 49-78 years with a mean of 62.3 years. All of the subjects were experienced hearing aid users. Some subjects were employed and most participated in regular social activities with family and other groups. The hearing aids were 8-channel WDRC instruments programmed to match targets from the manufacturer’s proprietary fitting formula. Activation of the automatic directional microphone required input levels of 60dB or above, with the presence of noise in the environment and speech located in front of the wearer. Automatic noise management resulted in gain reductions in one or more of the 8 channels, based on the presence of noise-like sounds classified, according to their spectral and temporal characteristics, as “wind, mechanical sounds or other sounds”. No gain reductions were applied for sounds classified as “speech”. Expansion was active for inputs below the compression thresholds, which ranged from 27 to 54dB SPL.

All participants carried a Personal Digital Assistant (PDA) connected via programming boots to their hearing aids. The PDA logged environmental broadband input level as well as the status of expansion, directionality, noise management and channel-specific gain reduction. Participants were asked to wear the hearing aids connected to the PDA for as much of the day as possible, and measurements were made in 5-sec intervals to allow time for hearing aid features to update several times between readings. The PDAs were worn with the hearing aids for a period of 4-5 weeks, and at the end of data collection a total of 741 hours of hearing aid use had been logged and analyzed.

Examination of the input level measurements revealed that subjects spent about half of their time in quiet environments with input levels of 50dB SPL or lower. Less than 5% of their time was spent in environments with input levels exceeding 65dB and the maximum recorded input level was 105dB SPL. This concurs with previous studies that reported high proportions of time spent in quiet environments such as living rooms or offices (Walden et al., 2004; Wagener et al., 2008).  The interaural difference in input level was 1dB about 50% of the time and exceeded 5dB only 5% of the time. Interaural differences were attributed to head shadow effects and asymmetrical sound sources as well as occasional accidental physical contact with the hearing aids, such as adjusting eyeglasses or rubbing the pinna.

Expansion was analyzed in terms of the proportion of time it was activated and whether the aids were in bilateral agreement. Expansion thresholds are meant to approximate low-level speech presented at 50dB.  In this study, expansion was active between 42% and 54% of the time, which is consistent with its intended activation, because about half the time the input levels were at or below 50dB SPL.  Bilateral agreement was relatively high at 77-81%.

Directional microphone status was measured according to the proportion of time that directionality was active and whether there was bilateral agreement. Again, directional status was consistent with the broadband input level measurements, in that directionality was active only about 10% of the time. The instruments were designed to switch to directional mode only when input levels were higher than 60dBA, and the broadband input measurements showed that participants encountered inputs higher than 65dB only about 5% of the time. Bilateral agreement for directionality was very high at 97%. Interestingly, the hearing aids were in directional mode only about 50% of the time in the louder environments.  This is likely attributable to the requirement for not only high input levels but also speech located in front of the listener in the presence of surrounding noise. A loud environment alone should not trigger directionality without the presence of speech in front of the listener.

Noise reduction was active 21% of the time with bilateral agreement of 95%. Again, this corresponds well with the input level measurements because noise reduction is designed to activate only in levels exceeding 50dB SPL. This does not indicate how often it was activated in the presence of moderate to loud noise, but as input levels rose, gain reductions resulting from noise management steadily increased as well. Gain reduction was 3-5dB greater in channels below 2250Hz than in the high frequency channels, consistent with the idea that environmental noise contains more energy in the low frequencies. Interaural differences in noise management were very small with a median difference in gain reduction of 0dB in all channels and exceeding 1dB only 5% of the time.
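
The activation and agreement statistics reported above can be computed directly from paired logs. The sketch below assumes one boolean status sample per aid per 5-second interval; this is an illustrative format and hypothetical data, not Banerjee's actual log structure.

```python
def feature_summary(left_log, right_log):
    """Per-aid activation proportions and bilateral agreement for one
    feature (e.g. directionality), given paired boolean status samples
    logged simultaneously in the left and right aids."""
    n = len(left_log)
    prop_left = sum(left_log) / n                              # time active, left aid
    prop_right = sum(right_log) / n                            # time active, right aid
    agreement = sum(l == r for l, r in zip(left_log, right_log)) / n
    return prop_left, prop_right, agreement

# Four hypothetical 5-second intervals of directionality status:
left = [True, True, False, False]
right = [True, False, False, False]
prop_left, prop_right, agreement = feature_summary(left, right)
```

Note that high per-aid activation proportions and high bilateral agreement are independent quantities: two aids can each be active half the time yet disagree frequently, which is why the study reports both.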

Bilateral agreement was generally quite high. Conditions in which there was less bilateral agreement may reflect asymmetric sound sources, accidental physical contact with the hearing instruments or true disagreement based on small differences in input levels arriving at the two ears. There may be everyday situations in which hearing aids might not perform in bilateral agreement, but this is not necessarily a disadvantage to the user. For instance, a driver in a car might experience directionality in the left aid but omnidirectional pickup from the right aid. This may be advantageous for the driver if there is another occupant in the passenger’s seat. Similarly, at a restaurant a hearing aid user might experience disproportionate noise or multi-talker babble from one side, depending on where he is situated relative to other people. Omnidirectional pickup on the quieter side of the listener with directionality on the opposite side might be desirable and more conducive to conversation. Similar arguments could be proposed for asymmetrical activation of noise management and its potential effects on comfort and ease of listening in noisy environments.

Banerjee’s investigation is an important step toward understanding how hearing aid signal processing is activated in everyday conditions. Though datalogging helps provide an overall snapshot of usage patterns and listening environments, the gross reporting of data limits utility in fine-tuning of hearing aid parameters. This study, and others like it, will provide useful information for clinicians providing follow-up care with hearing aid users.

It is noteworthy that participants spent about 50% of their time in environments with broadband input levels of 50dB SPL or lower. Though some participants were employed and others were not, this appears to be an acoustic reality for many hearing aid wearers. Subsequent studies with targeted samples would help determine how special features apply to the everyday environments of participants who lead more consistently active lifestyles.

Automatic, adaptive signal processing features have potential benefits for many hearing aid users, especially those who are unable to or prefer not to operate manual controls. However, proper recommendations and programming adjustments can only be made if clinicians understand how these features behave in everyday life. This study provides evidence that some features perform as designed and offers insight for clinicians to leverage when fine-tuning instruments based on real-world hearing aid behavior.

 

References

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

Cord, M., Surr, R., Walden, B. & Olsen, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Keidser, G. (1996). Selecting different amplification for different listening conditions. Journal of the American Academy of Audiology 7, 92-104.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Surr, R., Walden, B., Cord, M. & Olsen, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Wagener, K., Hansen, M. & Ludvigsen, C. (2008). Recording and classification of the acoustic environment of hearing aid users. Journal of the American Academy of Audiology 19, 348-370.

Does lip reading take the effort out of speech understanding?

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

For many people with hearing loss, visual cues from lip-reading are valuable and have been proven to improve speech recognition across a variety of listening conditions (Sumby & Pollack, 1954; Erber, 1975; Grant et al., 1998). To date, it has remained unclear how visual cues, background noise, and hearing aid use interact with each other to affect listening effort.

Listening effort is often described as the allocation of additional cognitive resources to the task of understanding speech. If cognitive resources are finite or limited, then two or more simultaneous tasks will be in competition with each other for cognitive resources. Decrements in performance on one task can be interpreted as an allocation of resources away from the task and toward another concurrent task. Therefore, listening effort is often measured with dual-task paradigms, in which listeners respond to speech stimuli while simultaneously performing another task or responding to another kind of stimulus. Allocation of cognitive resources in this way is thought to represent a competition for working memory resources (Baddeley & Hitch, 1974; Baddeley, 2000).

The Ease of Language Understanding (ELU) model states that the process of understanding language involves matching phonological, syntactic, semantic and prosodic information to stored templates in long-term memory. When there is a mismatch between the incoming sensory information and the stored template, additional effort must be exerted to resolve the ambiguity of the message. This additional listening effort taxes working memory resources and may require the listener to allocate fewer resources to other tasks. Several studies have shown that conditions that degrade a speech signal, such as background noise (Murphy et al., 2000; Larsby et al., 2005; Zekveld et al., 2010) and hearing loss (Rabbitt, 1990; McCoy et al., 2005), increase listening effort.

Individuals with reduced working memory capacity may be more negatively affected by conditions that degrade a speech signal. Previous reports have suggested that differences in working memory capacity are related to speech recognition in noise and to performance with hearing aids in noise (Lunner, 2003; Foo et al., 2007). The speed of retrieval from long-term memory may also affect performance and listening effort in adverse listening conditions (Van Rooij et al., 1989; Lunner, 2003). Because sensory inputs decay rapidly (Cowan, 1984), listeners with slow processing speed might not be able to fully process incoming information and match it to long-term memory stores before it decays. Therefore, they would have to allocate more effort and resources to the task of matching sensory input to long-term memory templates.

Just as some listener traits might be expected to increase listening effort, some factors might offset adverse listening conditions by providing more information to support the matching of incoming sensory inputs to long-term memory. The use of visual cues is well known to improve speech recognition performance and some studies indicate that individuals with large working memory capacities are better able to make use of visual cues from lipreading (Picou et al., 2011).  Additionally, listeners who are better lipreaders may require fewer cognitive resources to understand speech, allowing them to make better use of visual cues in noisy environments (Hasher & Zacks, 1979; Picou et al., 2011).

The purpose of Picou, Ricketts and Hornsby’s study was to examine how listening effort is affected by hearing aid use, visual cues and background noise. A secondary goal of the study was to determine how specific listener traits such as verbal processing speed, working memory and lipreading ability would affect the measured changes in listening effort.

Twenty-seven hearing-impaired adults participated in the study. All were experienced hearing aid users and had corrected binocular vision of 20/40 or better. Participants were fitted with bilateral behind-the-ear hearing aids with non-occluding, non-custom eartips. Advanced features such as directionality and noise reduction were turned off, though feedback management was left on in order to maximize usable gain. Hearing aids were programmed with linear settings to eliminate any potential effect of amplitude compression on listening effort, a relationship that has not yet been established.

A dual-task paradigm with a primary speech recognition task and a secondary visual reaction time task was used to measure listening effort. The speech recognition task used monosyllabic words spoken by a female talker (Picou, 2011), presented at 65 dB in the presence of multi-talker babble. Prior to the speech recognition task, individual SNRs for auditory only (AO) and auditory-visual (AV) conditions were determined at levels that yielded performance between 50% and 70% correct, because scores in this range are most likely to show changes in listening effort (Gatehouse & Gordon, 1990).

The reaction time task required participants to press a button in response to a rectangular visual probe. The probe was presented prior to each speech token so that it would not distract from the use of visual cues during the speech recognition task. The visual and speech stimuli were presented within a narrow enough interval (less than 500 msec) that cognitive resources had to be expended on both tasks at the same time (Hick & Tharpe, 2002).

Three listener traits were examined with regard to listening effort in quiet and noisy conditions, with and without visual cues. Visual working memory was evaluated with the Automated Operation Span (AOSPAN) test (Unsworth et al., 2005). The AOSPAN requires subjects to solve math equations and memorize letters. After seeing a math equation and identifying the answer, subjects are shown a letter which disappears after 800 msec. Following a series of equations they are then asked to identify the letters that they saw, in the order that they appeared. Scores are based on the number of letters that are recalled correctly.

Verbal processing speed was assessed with a lexical decision task (LDT) in which subjects were presented with a string of letters and were asked to indicate, as quickly as possible, if the letters formed a real word.  The test consisted of 50 common monosyllabic English words and 50 monosyllabic nonwords. The task reflects verbal processing speed because it requires the participant to match the stimuli to representations of familiar words stored in long-term memory (Meyer & Schvaneveldt, 1971; Milberg & Blumstein, 1981; Van Rooij et al., 1989). The reaction time to respond to the stimuli was used as a measure of verbal processing speed.

Finally, lipreading ability was measured with the Revised Shortened Utley Sentence Lipreading Test (ReSULT; Updike, 1989). The test required participants to repeat sentences spoken by a female talker, when the talker’s face was visible but speech was inaudible. Responses were scored based on the number of words repeated correctly in each sentence.

Subjects participated in two test sessions. At the first session, vision and hearing were tested, individual SNR levels were determined for the speech recognition task, and AOSPAN, LDT and ReSULT scores were obtained. At the second session, subjects completed practice sequences with AO and AV stimuli; then the dual speech recognition and visual reaction time tasks were administered in the eight counterbalanced conditions listed below. Due to the number of experimental conditions, only select outcomes of this study will be reviewed.

1. auditory only in quiet, unaided
2. auditory only in noise, unaided
3. auditory-visual in quiet, unaided
4. auditory-visual in noise, unaided
5. auditory only in quiet, aided
6. auditory only in noise, aided
7. auditory-visual in quiet, aided
8. auditory-visual in noise, aided

The main analysis showed that background noise impaired performance in all conditions, while hearing aid use and visual cues improved performance. However, there were significant interactions between hearing aid use and visual cues, hearing aid use and background noise, and visual cues and background noise, indicating that the effect of hearing aid use depended on both the test modality (AV or AO) and the presence of background noise, and that the effect of visual cues depended on the presence of background noise. Hearing aid benefit proved to be larger in AO conditions than in AV conditions and larger in quiet conditions than in noisy conditions. The effect of noise was greater in the AV conditions than in the AO conditions, but the authors suggest that this could have been related to the individualized SNRs chosen for the test procedure.

On the reaction time task, background noise increased listening effort and hearing aid use reduced listening effort, though there was high variability and the effects of both variables were small. Additional analysis determined that the individual SNRs chosen for the dual task did not affect the measured hearing aid benefits. The availability of visual cues did not change overall reaction times, indicating that visual cues did not affect listening effort on this measure.

With regard to listening effort benefits derived from hearing aid use, performance in quiet conditions was strongly related to performance in noise. In other words, subjects who obtained benefit from hearing aid use in quiet also obtained benefit in noise. In addition, individuals with slower verbal processing speed were more likely to derive benefit from hearing aid use. With regard to visual cues, there were several correlations with listener traits. Subjects who were better lipreaders derived more benefit from visual cues, and those with smaller working memory capacities also showed more benefit from visual cues. These correlations were significant in quiet and noisy conditions. For quiet conditions, there was a positive correlation between verbal processing speed and benefit from visual cues, with better verbal processors showing more benefit from visual cues. There were no correlations between background noise and any of the measured listener traits.

The overall findings that visual cues and hearing aid use had positive effects and background noise had a negative effect on speech perception performance were not surprising. Similarly, the findings that hearing aid benefit was reduced for AV conditions versus AO conditions and for noisy versus quiet conditions were consistent with previous reports (Cox & Alexander, 1991; Walden et al., 2001; Duquesnoy & Plomp, 1983).  Because hearing aid use improves audibility, visual cues might not have been needed as much as they were in unaided conditions and the presence of noise may have counteracted the improved audibility by masking a portion of the speech cues needed for correct understanding, especially with the omnidirectional, linear instruments used in this study.

The ability of hearing aids to decrease listening effort was significant, in keeping with previously published results, but the improvements were smaller than those reported in some previous studies. This could be related to the non-simultaneous timing of the tasks in the dual-task paradigm, but the authors surmise that it could also be related to the way their subjects’ hearing aids were programmed. In most previous studies, individuals used their own hearing aids, set to individually prescribed and modified settings. In the current study, all participants used the same hearing aid circuit set to linear, unmodified targets. Advanced features like directionality and noise reduction, which are likely to impact listening effort (Sarampalis et al., 2009), speech discrimination ability and perceived ease of listening in everyday situations, were turned off.

There was a significant relationship between verbal processing speed and hearing aid benefit, in that subjects with slower processing speed were more likely to benefit from hearing aid use.  Sensory input decays rapidly and requires additional cognitive effort when it is mismatched with long-term memory stores. Any factor that improves the sensory input may facilitate the matching process. The authors posited that slow verbal processors might benefit more from amplification because hearing aids improved the quality of the sensory input, thereby reducing the cognitive effort and time that would otherwise be required to match the input to long-term memory templates.

On average, the availability of visual cues did not have a significant effect on listening effort. This may be a surprising result given the well-known positive effects of visual cues for speech recognition. However, there was high variability among subjects and it was apparent that better lipreaders were more able to make use of visual cues, especially in quiet conditions without hearing aids. Working memory capacity was negatively correlated with benefit from visual cues, indicating that subjects with better working memory capacity derived less benefit from visual cues on average. The relationship between these variables is unclear, but the authors suggest that individuals with lower working memory capacities may be more susceptible to changes in listening effort and therefore more likely to benefit from additional sensory information such as visual cues.

Understanding how individual traits affect listening effort and susceptibility to noise is important to audiologists for a number of reasons, partly because we often work with older individuals. Working memory declines as a result of the normal aging process and may begin in middle age (Wang, et al., 2011).  Similarly, the speed of cognitive processing slows and visual impairment becomes more likely with increasing age (Clay, et al., 2009). Many patients seeking audiological care may also suffer from these deficits in working memory, verbal processing, and visual acuity. Though more research is needed to understand how these variables relate to one another, they should be considered in clinical evaluations and hearing aid fittings.  Advanced hearing aid features that counteract the degrading effects of noise and reverberation may be particularly important for elderly or visually impaired hearing aid users. As shown in the reviewed study, these patients will benefit significantly from face-to-face conversation, slow speaking rates and reduced environmental distractions. Counseling sessions should include discussion of these issues so that patients and family members understand how they can use strategic listening techniques, in addition to hearing aids, to improve speech recognition and reduce cognitive effort.

References

Clay, O., Edwards, J., Ross, L., Okonkwo, O., Wadley, V., Roth, D. & Ball, K. (2009). Visual function and cognitive speed of processing mediate age-related decline in memory span and fluid intelligence. Journal of Aging and Health 21(4), 547-566.

Cox, R.M. & Alexander, G.C. (1991).  Hearing aid benefit in everyday environments. Ear and Hearing 12, 127-139.

Downs, D.W. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Duquesnoy, A.J. & Plomp, R. (1983). The effect of a hearing aid on the speech reception threshold of hearing impaired listeners in quiet and in noise. Journal of the Acoustical Society of America 73, 2166-2173.

Erber, N.P. (1975). Auditory-visual perception of speech. Journal of Speech and Hearing Disorders 40, 481-492.

Foo, C., Rudner, M. & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Gatehouse, S., Naylor, G. & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology 42 Suppl 1, S77-S85.

Gatehouse, S. & Gordon, J. (1990). Response times to speech stimuli as measures of benefit from amplification. British Journal of Audiology 24, 63-68.

Grant, K.W., Walden, B.F. & Seitz, P.F. (1998).  Auditory visual speech recognition by hearing impaired subjects. Consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hornsby, B.W.Y. (2013).  The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, in press.

Meyer, D.E. & Schvaneveldt, R.W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90, 227-234.

Milberg, W. & Blumstein, S.E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language 14, 371-385.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

Rudner, M., Foo, C. & Ronnberg, J. (2009). Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology 50, 405-418.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Unsworth, N., Heitz, R.P. & Schrock, J.C. (2005). An automated version of the operation span task. Behavioral Research Methods 37, 498-505.

Van Rooij, J.C., Plomp, R. & Orlebeke, J.F. (1989).  Auditive and cognitive factors in speech perception by elderly listeners. I: Development of test battery. Journal of the Acoustical Society of America 86, 1294-1309.

Walden, B.F., Grant, K.W. & Cord, M.T. (2001). Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear and Hearing 22, 333-341.

Wang, M., Gamo, N., Yang, Y., Jin, L., Wang, X., Laubach, M., Mazer, J., Lee, D. & Arnsten, A. (2011). Neuronal basis of age-related working memory decline. Nature 476, 210-213.

Can Aided Audibility Predict Pediatric Lexical Development?

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Despite advances in early hearing loss identification, hearing aid technology, and fitting and verification tools, children with hearing loss consistently demonstrate limited lexical abilities compared to children with normal hearing.  These limitations have been illustrated by poorer performance on tests of vocabulary (Davis et al., 1986), word learning (Gilbertson & Kamhi, 1995; Stelmachowicz et al., 2004), phonological discrimination, and non-word repetition (Briscoe et al., 2001; Delage & Tuller, 2007; Norbury, et al., 2001).

There are a number of variables that may predict hearing-impaired children’s performance on speech and language tasks, including the age at which they were first fitted with hearing aids and the degree of hearing loss. Moeller (2000) found that children who received earlier aural rehabilitation intervention demonstrated significantly larger receptive vocabularies than those who received later intervention. Degree of hearing loss, typically defined in studies as the pure-tone average (PTA), i.e. the average of pure-tone hearing thresholds at 500 Hz, 1000 Hz, and 2000 Hz (Fletcher, 1929), has been significantly correlated with speech recognition (Davis et al., 1986; Gilbertson & Kamhi, 1995), receptive vocabulary (Fitzpatrick et al., 2007; Wake et al., 2005), expressive grammar, and word recognition (Delage & Tuller, 2007) in some studies comparing hearing-impaired children to those with normal hearing.
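The pure-tone averages discussed here and below reduce to simple arithmetic over the audiogram. The sketch below uses a hypothetical audiogram and an illustrative function name; note also that studies differ in which frequencies they include in a high-frequency average, so the 1-4 kHz variant shown is only one possibility.

```python
# Illustrative sketch of pure-tone averaging; thresholds are hypothetical.
def pta(thresholds_db_hl, freqs_hz=(500, 1000, 2000)):
    """Pure-tone average: mean hearing threshold (dB HL) across the given
    frequencies. Defaults to the classic 500/1000/2000 Hz average
    (Fletcher, 1929)."""
    return sum(thresholds_db_hl[f] for f in freqs_hz) / len(freqs_hz)

# Hypothetical sloping audiogram: frequency (Hz) -> threshold (dB HL)
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 50, 4000: 65, 8000: 70}

classic_pta = pta(audiogram)                               # (30+40+50)/3 = 40.0
hf_pta = pta(audiogram, freqs_hz=(1000, 2000, 4000))       # (40+50+65)/3 ~ 51.7
```

For a sloping loss like this one, the high-frequency variant captures more of the loss in the region carrying consonant information above 2000 Hz, which is why HFPTA can relate to speech recognition where the classic PTA does not.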

In contrast, other studies have reported that pure-tone average (PTA) did not predict language ability in hearing-impaired children. Davis et al. (1986) tested hearing-impaired subjects between five and 18 years of age and found no significant relationship between PTA and vocabulary, verbal ability, reasoning, or reading. However, all subjects scored below average on these measures, regardless of their degree of hearing loss. Similarly, Moeller (2000) found that age of intervention affected vocabulary and verbal reasoning, but PTA did not. Gilbertson and Kamhi (1995) studied novel word learning in hearing-impaired children ranging in age from seven to 10 years and found that neither PTA nor unaided speech recognition threshold was correlated with receptive vocabulary level or word learning.

At a glance, it seems likely that degree of hearing loss should affect language development and ability, because hearing loss affects audibility, and speech must be audible in order to be processed and learned. However, the typical PTA of thresholds at 500 Hz, 1000 Hz, and 2000 Hz does not take into account high-frequency speech information beyond 2000 Hz. Some studies using averages of high-frequency pure-tone thresholds (HFPTA) have found a significant relationship between degree of loss and speech recognition (Amos & Humes, 2007; Glista et al., 2009).

Because most hearing-impaired children now benefit from early identification and intervention, their pure-tone hearing threshold averages (PTA or HFPTA) might not be the best predictors of speech and language abilities in everyday situations. Rather, a measure that combines degree of hearing loss with hearing aid characteristics might be a better predictor of speech and language ability in hearing-impaired children. The Speech Intelligibility Index (SII; ANSI, 2007), a measure of audibility that weights different frequency regions by their importance to the phonemic content of a given speech test, has proven to be predictive of performance on speech perception tasks for adults and children (Dubno et al., 1989; Pavlovic et al., 1986; Stelmachowicz et al., 2000). Hearing aid gain characteristics can be incorporated into the SII algorithm to yield an aided SII, which has been reported to predict performance on word repetition (Magnusson et al., 2001) and nonsense syllable repetition in adults (Souza & Turner, 1999). Because the aided SII incorporates both the individual’s hearing loss and hearing aid characteristics into the calculation, it better represents how audibility affects an individual’s daily functioning.
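The full calculation is specified in ANSI S3.5, but the SII's core idea — weighting the audible portion of speech in each frequency band by that band's importance — can be sketched in a few lines. All numbers below are invented placeholders, not the standard's band-importance values, and the real standard uses many more bands and exact importance functions; this sketch only illustrates why adding hearing aid gain raises the index.

```python
# Simplified, illustrative sketch of the SII's core logic (ANSI S3.5).
# Band-importance weights and all levels are hypothetical.
def simplified_sii(speech_band_levels, floor_levels, importance):
    """Sum of band importance weighted by band audibility, where audibility
    is the fraction of a ~30 dB speech dynamic range above the listener's
    effective floor (threshold or noise) in that band."""
    index = 0.0
    for band, weight in importance.items():
        snr = speech_band_levels[band] - floor_levels[band]
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        index += weight * audibility
    return index

importance = {500: 0.2, 1000: 0.3, 2000: 0.3, 4000: 0.2}  # weights sum to 1.0
floor = {500: 45, 1000: 55, 2000: 60, 4000: 70}           # listener thresholds (dB)

unaided_speech = {500: 55, 1000: 50, 2000: 45, 4000: 40}  # speech band levels (dB)
gains = {500: 5, 1000: 15, 2000: 20, 4000: 25}            # hearing aid gain (dB)
aided_speech = {f: lvl + gains[f] for f, lvl in unaided_speech.items()}

unaided_sii = simplified_sii(unaided_speech, floor, importance)  # ~0.27
aided_sii = simplified_sii(aided_speech, floor, importance)      # ~0.72
```

The aided index is higher because the gain lifts more of the speech dynamic range above threshold in the heavily weighted mid and high frequencies — the same reason two children with identical PTAs but different fittings can have very different aided SIIs.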

The purpose of the current study was to evaluate the aided SII as a predictor of performance on measures of word recognition, phonological working memory, receptive vocabulary, and word learning.  Because development in these areas establishes a base for later achievements in language learning and reading (Tomasello, 2000; Stanovich, 1986), it is important to determine how audibility affects lexical development in hearing-impaired children.  Though the SII is usually calculated based on the particular speech test to be studied, the current investigation used aided SII values based on average speech spectra.  The authors explained that vocabulary acquisition is a cumulative process, and they intended to use the aided SII as a measure of cumulative, rather than test-specific, audibility.

Sixteen hearing-impaired children with hearing aids (CHA) and 24 children with normal hearing (CNH) between six and nine years of age participated in the study.  All of the hearing-impaired children had bilateral hearing loss and had used amplification for at least one year.  All participants used spoken language as their primary form of communication.  Real-ear measurements were used to calculate the aided SII at user settings.  Because the goal was to evaluate the children’s actual audibility as opposed to optimal audibility, their current user settings were used in the experiment whether or not they met DSL prescriptive targets (Scollie et al., 2005).

Subjects participated in tasks designed to assess four lexical domains.

Word recognition was measured by the Lexical Neighborhood Test and Multisyllabic Lexical Neighborhood Test (LNT and MLNT; Kirk & Pisoni, 2000). These tests each contain “easy” and “hard” lists, based on how frequently the words occur in English and how many lexical neighbors they have. Children with normal lexical development are expected to show a gradient in performance, with the best scores on the easy MLNT and poorest scores on the hard LNT.

Non-word repetition was measured by a task prepared specifically for this study, using non-words selected based on adult ratings of “wordlikeness”. In both the word recognition and non-word repetition tasks, children were simply asked to repeat the words that they heard, and responses were scored according to the number of phonemes correct. Additionally, the LNT and MLNT tests were scored based on the number of words correct.

Receptive vocabulary was measured by the Peabody Picture Vocabulary Test (PPVT-III; Dunn & Dunn, 1997), in which the children were asked to view four images and select the one that corresponds to the presented word. Raw scores are the number of items correctly identified, with norms applied based on the subject’s age.

Novel word learning was assessed using the same stimuli from the non-word repetition task, after the children were given sentence context and visual imagery to teach them the “meaning” of the novel words. Their ability to learn the novel words was evaluated in two ways: a production task, in which they were asked to say the word when prompted by a corresponding picture, and an identification task, in which they were presented with an array of four items and asked to select the one that corresponded to the presented word.

On the word recognition tests, the children with hearing aids (CHA) demonstrated poorer performance than the children with normal hearing (CNH) for measures of word and phoneme accuracy, though both groups demonstrated the expected gradient, with performance improving in parallel fashion from the hard LNT test through the easy MLNT test.  There was a correlation between aided SII and word recognition scores, but PTA and aided SII were equally good at predicting performance.

On the non-word repetition task, which requires auditory perception, phonological analysis, and phonological storage (Gathercole, 2006), CHA again demonstrated significantly poorer performance than CNH, and CNH performance was near ceiling levels.  PTA and aided SII scores were correlated with non-word repetition scores.  Beyond the effect of PTA, it was determined that aided SII accounted for 20% of the variance on the non-word repetition task, which was statistically significant.

The receptive vocabulary test yielded similar results; CHA performed significantly worse than CNH and both PTA and aided SII accounted for a significant proportion of the variance.

The only variable that predicted performance on the word learning tasks was age, which only yielded a significant effect on the word production task.  On the word identification task, both the CHA and CNH groups scored only slightly better than chance and there were no significant effects of group or age.

As was expected in this study, children with hearing aids (CHA) consistently showed poorer performance than children with normal hearing (CNH), with the exception of the novel word learning task.  The pattern of results suggests that aided audibility, as measured by the aided SII, was better at predicting performance than degree of hearing loss as measured by PTA.  Greater aided SII scores were consistently associated with more accurate word recognition, more accurate non-word repetition, and larger receptive vocabulary.

Although PTA or HFPTA may represent the degree of unaided hearing loss, because the aided SII score accounts for the contribution of the individual’s hearing aids, it is likely a better representation of speech audibility and auditory perception in everyday situations.  The authors point out that depending on the audiometric configuration and hearing aid characteristics, two individuals with the same PTA could have different aided SIIs, and therefore different auditory experiences.

The results of this study underscore the importance of audibility for lexical development, which in turn has significant implications for further development of language, reading, and academic skills.  Therefore, the early provision of audibility via appropriate and verifiable amplification appears to be an important step in the development of speech and language.  The SII, which is already incorporated into some real-ear systems or is available in a standalone software package, is a verification tool that should be considered a standard part of the fitting protocol for pediatric hearing aid patients.

 

References

American National Standards Institute (2007). Methods for calculation of the Speech Intelligibility index (ANSI S3.5-1997[R2007]). New York, NY: Author.

Amos, N.E. & Humes, L.E. (2007). Contribution of high frequencies to speech recognition in quiet and noise in listeners with varying degrees of high-frequency sensorineural hearing loss. Journal of Speech, Language and Hearing Research 50, 819-834.

Briscoe, J., Bishop, D.V. & Norbury, C.F. (2001). Phonological processing, language and literacy: a comparison of children with mild-to-moderate sensorineural hearing loss and those with specific language impairment. Journal of Child Psychology and Psychiatry 42, 329-340.

Davis, J.M., Elfenbein, J., Schum, R. & Bentler, R.A. (1986). Effects of mild and moderate hearing impairments on language, educational and psychosocial behavior of children. Journal of Speech and Hearing Disorders 51, 53-62.

Delage, H. & Tuller, L. (2007). Language development and mild-to-moderate hearing loss: Does language normalize with age? Journal of Speech, Language and Hearing Research 50, 1300-1313.

Dubno, J.R., Dirks, D.D. & Schaefer, A.B. (1989). Stop-consonant recognition for normal hearing listeners and listeners with high-frequency hearing loss. II: Articulation index predictions. The Journal of the Acoustical Society of America 85, 355-364.

Dunn, L.M. & Dunn, D.M. (1997). Peabody Picture Vocabulary Test – III. Circle Pines, MN: AGS.

Fitzpatrick, E., Durieux-Smith, A., Eriks-Brophy, A., Olds., J. & Gaines, R. (2007). The impact of newborn hearing screening on communications development. Journal of Medical Screening 14, 123-131.

Fletcher, H. (1929). Speech and hearing in communication. Princeton, NJ: Van Nostrand Reinhold.

Gilbertson, M. & Kamhi, A.G. (1995). Novel word learning in children with hearing impairment. Journal of Speech and Hearing Research 38, 630-642.

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V. & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology 48, 632-644.

Kirk, K.I. & Pisoni, D.B. (2000). Lexical Neighborhood Tests. St. Louis, MO: AudiTEC.

Magnusson, L., Karlsson, M. & Leijon, A. (2001). Predicted and measured speech recognition performance in noise with linear amplification. Ear and Hearing 22, 46-57.

Moeller, M.P. (2000). Early intervention and language development in children who are deaf and hard of hearing. Pediatrics 106, e43.

Norbury, C.F., Bishop, D.V. & Briscoe, J. (2001). Production of English finite verb morphology: A comparison of SLI and mild-moderate hearing impairment. Journal of Speech, Language and Hearing Research 44, 165-178.

Pavlovic, C.V., Studebaker, G.A. & Sherbecoe, R.L. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. The Journal of the Acoustical Society of America 80, 50-57.

Scollie, S.D., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D. & Pumford, J. (2005). The desired sensation level multistage input/output algorithm. Trends in Amplification 9(4), 159-197.

Souza, P.E. & Turner, C.W. (1999). Quantifying the contribution of audibility to recognition of compression-amplified speech. Ear and Hearing 20, 12-20.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly 21, 360-407.

Stelmachowicz, P.G., Hoover, B.M., Lewis, D.E., Kortekaas, R.W. & Pittman, A.L. (2000). The relation between stimulus context, speech audibility and perception for normal hearing and hearing-impaired children. Journal of Speech, Language and Hearing Research 43, 902-914.

Stelmachowicz, P.G., Pittman, A.L., Hoover, B.M. & Lewis, D.E. (2004 ). Novel word learning in children with normal hearing and hearing loss. Ear and Hearing 25, 47-56.

Tomasello, M. (2000). The item-based nature of children’s early syntactic development. Trends in Cognitive Sciences 4, 156-163.

Wake, M., Poulakis, Z., Hughes, E.K., Carey-Sargeant, C. & Rickards, F.W. (2005). Hearing impairment: A population study of age at diagnosis, severity and language outcomes at 7-8 years. Archives of Disease in Childhood 90, 238-244.

 

True or False? Two hearing aids are better than one.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Audiologists are accustomed to recommending two hearing aids for individuals with bilateral hearing loss, based on the known benefits of binaural listening (Carhart, 1946; Keys, 1947; Hirsh, 1948; Koenig, 1950), though the potential advantages of binaural versus monaural amplification have been debated for many years.

One benefit of binaural listening, binaural squelch, occurs when the signal and competing noise come from different directions (Kock, 1950; Carhart, 1965). In this situation, time and intensity differences cause the waveforms arriving at each ear to differ, resulting in a dichotic listening situation. The central auditory system is thought to combine these two disparate waveforms, essentially subtracting the waveform arriving at one side from that of the other, resulting in an effective SNR improvement of about 2-3 dB (Dillon, 2001).

Binaural redundancy, another potential benefit of listening with two ears, is an advantage created simply by receiving similar information in both ears. Dillon (2001) describes binaural redundancy as allowing the brain to get two “looks” at the same sound, resulting in SNR improvement of another 1-2 dB (MacKeith & Coles, 1971; Bronkhorst & Plomp, 1988).

Though the benefits of binaural listening would imply benefits of binaural amplification as well, there has been a lack of consensus among researchers. Some studies have reported clear advantages of binaural amplification over monaural fittings, but others have not. Decades ago a number of studies were published on both sides of the argument, but differences in outcomes may have been related to speaker location and the presentation angles of the speech and noise signals (Ross, 1980), so the potential advantages of binaural amplification remained unclear.

Some recent reports have supported the use of monaural amplification over binaural for some individuals, in objective and subjective studies. Henkin et al. (2007) reported that 71% of their subjects performed better on a speech-in-noise task when fitted with one hearing aid on the “better” ear than when fitted with two hearing aids. Cox et al. (2011) reported that 46% of their subjects preferred to use one hearing aid rather than two.

In contrast, a report by Mencher & Davis (2006) concluded that 90% of adults perform better with two hearing aids. They explained that 10% of adults may have experienced negative binaural interaction or binaural interference, which is described as the inappropriate fusion of signals received at the two ears (Jerger et al., 1993; Chmiel et al., 1997).

The phenomenon of binaural interference and the potential advantage of monaural amplification were investigated by Walden & Walden (2005). In a speech-recognition-in-noise task in which speech and competing babble were presented through a single loudspeaker at 0 degrees azimuth, they found that performance with one hearing aid was better than binaural performance for 82% of their participants. This contrasts with Jerger's (1993) report that only 8-10% of subjects might have experienced binaural interference. One criticism of Walden & Walden's study is that their "monaural" condition left the unaided ear open. Given the presentation level of 70 dB HL and the use of subjects with mild to moderate hearing loss, subjects were still receiving speech and noise cues in the unaided ear, resulting in a modified, but still binaural, listening situation. Furthermore, their choice of a single loudspeaker presenting both noise and speech directly in front of the listener created a diotic listening condition, which eliminated the use of binaural head shadow cues. This methodology may have limited the study's relevance to typical everyday situations in which listeners are engaged in face-to-face conversation with competing noise all around.

Because the potential advantages or disadvantages of binaural amplification have such important clinical implications, Rachel McArdle and her colleagues sought to clarify the issue with a two-part study of monaural and binaural listening. The first experiment was an effort to replicate Walden and Walden’s 2005 sound field study, this time adding a true monaural condition and an unaided condition. The second experiment examined monaural versus diotic and dichotic listening conditions, using real-world recordings from a busy restaurant.

Twenty male subjects were recruited from the Bay Pines Veterans Affairs Medical Facility. Subjects ranged in age from 59 to 85 years old and had bilateral, symmetrical hearing losses. All were experienced users of binaural hearing aids.

For the first experiment, subjects wore their own hearing aids, so a variety of models from different manufacturers were represented. Hearing aids were fitted according to NAL-NL1 prescriptive targets and were verified with real-ear measurements. All of the hearing aids were multi-channel instruments with directional microphones, noise reduction and feedback management. None of the special features were disabled during the study.

Subjects were tested in sound field, with a single loudspeaker positioned 3 feet in front of them. They were tested under five conditions: 1) right ear aided, left ear open, 2) left ear aided, right ear open, 3) binaurally aided, 4) right ear aided, left ear plugged (true monaural) and 5) unaided. The QuickSIN test (Killion et al., 2004) was used to evaluate sentence recognition in noise in all of these conditions. The QuickSIN test yields a value for "SNR loss", which represents how much higher an SNR the listener requires, relative to normal-hearing listeners, to repeat 50% of the key words in the sentences correctly.
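The QuickSIN scoring arithmetic can be illustrated with a short sketch (Python is used here purely for illustration; the constants follow the published QuickSIN scoring rule, and the word count in the example is hypothetical):

```python
def quicksin_snr_loss(words_correct: int) -> float:
    """Estimate SNR loss from one QuickSIN list of 6 sentences
    (5 key words each, 30 total). Per the published scoring rule,
    SNR-50 = 27.5 - words correct; normal-hearing listeners need
    about 2 dB SNR, so SNR loss = SNR-50 - 2 = 25.5 - words correct.
    """
    snr_50 = 27.5 - words_correct   # SNR at which this listener scores 50%
    return snr_50 - 2.0             # elevation relative to normal hearing

# A listener repeating 20 of 30 key words: SNR loss = 5.5 dB
print(quicksin_snr_loss(20))
```

A positive SNR loss thus quantifies how much quieter the background would need to be for this listener to match normal-hearing performance.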

The primary question of interest for the first experiment asked whether two aided ears would achieve better performance than one aided ear. The results showed that only 20% of the participants performed better with one aid, whereas 80% performed better with binaural aids. The lowest SNR loss values were for the binaural condition, followed by right ear aided, left ear aided, true monaural (with left ear plugged) and unaided. Analysis of variance revealed that the binaural condition was significantly better than all other conditions. The right-ear only condition was significantly better than unaided, but all other comparisons failed to reach significance.

The results of Experiment 1 are comparable to results reported by Jerger (1993) but contrast sharply with Walden and Walden's 2005 study, in which 82% of respondents performed better monaurally aided.  To compare their results further to Walden and Walden's, McArdle and her colleagues compiled scores for the subjects' better ears and found that there was no significant difference between binaural and better ear performance, but both of these conditions were significantly better than the other conditions. They also examined the effect of degree of hearing loss and found that individuals with hearing thresholds poorer than 70 dB HL were able to achieve about twice as much improvement from binaural amplification as those subjects with better hearing. Still, the results of Experiment 1 support the benefit of binaural hearing aids for most participants, especially those with poorer hearing.

The purpose of Experiment 2 was to further examine the potential benefit of hearing with two ears, using diotic and dichotic listening conditions. Diotic listening refers to a condition in which the listener receives the same stimulus in both ears, whereas dichotic listening refers to more typical real-world conditions in which each ear receives slightly different information, subject to head shadow and time and intensity differences.

Speech recognition was evaluated in four conditions: 1) monaural right, 2) monaural left, 3) diotic and 4) binaural or dichotic. Materials for the R-SPACE QSIN test (Revit, et al., 2007) were recorded through a KEMAR manikin with competing restaurant noise presented through eight loudspeakers. Recordings were taken from eardrum-position microphones on each side of KEMAR, thus preserving binaural cues that would be typical for a listener in a real-world setting.

In Experiment 2, subjects were not tested wearing hearing aids; the stimuli were presented via insert earphones. The use of recorded stimuli presented under earphones eliminated the potentially confounding factor of hearing aid technology on performance and allowed the presentation of real-world recordings in truly monaural, diotic and dichotic conditions.

The best performance was demonstrated in the binaural condition, followed by the diotic condition, then the monaural conditions. The binaural condition was significantly better than diotic and both the diotic and dichotic conditions were significantly better than the monaural conditions. Again in contrast to Walden and Walden’s study, 80% of the subjects scored better in the binaural condition than either of the monaural conditions and 65% of the subjects scored better in the diotic condition than either monaural condition. These results support the findings of the first experiment and indicate that for the majority of listeners, speech recognition in noise improves when two ears are listening instead of one. Furthermore, the finding that the binaural condition was significantly better than the diotic condition indicates that it is not only the use of two ears, but also the availability of binaural cues that have a positive impact on speech recognition in competing noise.

McArdle and her colleagues point out that their study, as well as other recent reports (Walden & Walden, 2005; Henkin, 2007), suggests that the majority of listeners perform better on speech-in-noise tasks when they are listening with two ears. When binaural time and intensity cues are available, performance is even better than with the same stimulus reaching each ear.  They also found that the potential benefit of binaural hearing was even more pronounced for individuals with more severe hearing loss. This supports the recommendation of binaural hearing aids for individuals with bilateral hearing loss, especially those with severe loss.

Cox et al. (2011) reported that listeners who experienced better performance in everyday situations tended to prefer binaural hearing aid use, but also found that 43 out of 94 participants generally preferred monaural to binaural use over a 12-week trial. For new hearing aid users or prior monaural users, this is not surprising, as it can take time to adjust to binaural hearing aid use. Clinical observation suggests that individuals who have prior monaural hearing aid experience may have more difficulty adjusting to binaural use than individuals who are new to hearing aids altogether.  However, with consistent daily use, reasonable expectations and appropriate counseling, most users can successfully adapt to binaural use. It is possible that the subjects in Cox et al.'s study who preferred monaural use were responding to factors other than performance in noise. If they were switching between monaural and binaural use, perhaps they did not wear the two instruments together consistently enough to fully acclimate to binaural use in the time allotted.

Though their study presented strong support for binaural hearing aid use, McArdle and her colleagues suggest that listeners may benefit from “self-experimentation” to determine the optimal configuration with their hearing aids. This suggestion is an excellent one, but it may be most helpful within the context of binaural use. Even patients with adaptive and automatic programs can be fitted with manually accessible programs designed for particularly challenging situations and should be encouraged to experiment with these programs to determine the optimal settings for their various listening needs.

Clinicians who have been practicing for several years may recall the days when hearing aid users often lost their hearing aids in restaurants because they had removed one aid in order to more easily ignore background noise. That is less likely to occur now, as current technology can help most hearing aid users function quite well in noisy situations. With directional microphones and multiple programs, along with the likelihood that speech and background noise are often spatially separated, binaural hearing aids are likely to offer advantageous performance for speech recognition in most acoustic environments. Bilateral data exchange and wireless communication offer additional binaural benefits, as two hearing instruments can work together to improve performance in noise and provide binaural listening for telephone or television use.

References

Bronkhorst, A.W. & Plomp, R. (1988). The effect of head induced interaural time and level differences on speech intelligibility in noise. Journal of the Acoustical Society of America 83, 1508-1516.

Carhart, R. (1965). Problems in the measurement of speech discrimination. Archives of Otolaryngology 82, 253-260.

Carhart, R. (1946). Selection of hearing aids. Archives of Otolaryngology 44, 1-18.

Chmiel, R., Jerger, J., Murphy, E., Pirozzolo, R. & Tooley, Y.C. (1997). Unsuccessful use of binaural amplification by an elderly person. Journal of the American Academy of Audiology 8, 1-10.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

Dillon, H. (2001). Monaural and binaural considerations in hearing aid fitting. In: Dillon, H., ed. Hearing Aids. Turramurra, Australia: Boomerang Press, 370-403.

Henkin, Y., Waldman, A. & Kishon-Rabin, L. (2007). The benefits of bilateral versus unilateral amplification for the elderly: are two always better than one? Journal of Basic and Clinical Physiology and Pharmacology 18(3), 201-216.

Hirsh, I.J. (1948). Binaural summation and interaural inhibition as a function of the level of masking noise. American Journal of Psychology 61, 205-213.

Jerger, J., Silman, S., Lew, J. & Chmiel, R. (1993). Case studies in binaural interference: converging evidence from behavioral and electrophysiologic measures. Journal of the American Academy of Audiology 4, 122-131.

Keys, J.W. (1947). Binaural versus monaural hearing. Journal of the Acoustical Society of America 19, 629-631.

Killion, M.C., Niquette, P.A., Gudmundsen, G.I., Revit, L.J. & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal hearing and hearing-impaired listeners. Journal of the Acoustical Society of America 116, 2395-2405.

Kock, W.E. (1950). Binaural localization and masking. Journal of the Acoustical Society of America 22, 801-804.

Koenig, W. (1950). Subjective effects in binaural hearing. [Letter to the Editor] Journal of the Acoustical Society of America 22, 61-62.

MacKeith, N.W. & Coles, R.A. (1971). Binaural advantages in hearing speech. Journal of Laryngology and Otology 85, 213-232.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Mencher, G.T. & Davis, A. (2006) Binaural or monaural amplification: is there a difference? A brief tutorial. International Journal of Audiology 45, S3-S11.

Revit, L., Killion, M. & Compton-Conley, C. (2007). Developing and testing a laboratory sound system that yields accurate real-world results. Hearing Review online edition, www.hearingreview.com, October 2007.

Ross, M. (1980). Binaural versus monaural hearing aid amplification for hearing impaired individuals. In: Libby, E.R., Ed. Binaural Hearing and Amplification. Chicago: Zenetron, 1-21.

Walden, T.C. & Walden, B.E. (2005). Monaural versus binaural amplification for adults with impaired hearing. Journal of the American Academy of Audiology 16, 574-584.

Cochlear Dead Regions and High Frequency Gain: How to Fit the Hearing Aid

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Cochlear dead regions (DRs) are defined as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). This does not result in an inability to perceive sound in the affected frequency range; rather, the sound is perceived via a spread of excitation to adjacent regions of the cochlea where the inner hair cells are still functioning. Because the response is spread over a broader tonotopic region, patients with cochlear dead regions may perceive some high frequency pure tones as "clicks", "buzzes" or "whooshes" rather than tones.

Dead regions can be present at moderate hearing thresholds (e.g. 60 dB HL) and are more likely to be present as the degree of loss increases. Psychophysical tuning curves are the preferred method for identifying cochlear dead regions in the laboratory, but they are complicated and time consuming. Moore and his colleagues developed the Threshold Equalizing Noise (TEN) Test as a clinical means of identifying dead regions (Moore et al., 2000; Moore et al., 2004). The TEN procedure looks for shifts in masked thresholds beyond what would typically be expected for a given hearing loss.  A threshold obtained with TEN masking noise that is shifted by at least 10 dB indicates the likely presence of a cochlear dead region.
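The decision rule described above can be sketched schematically (this is not a clinical implementation of the full TEN protocol, which also compares masked thresholds against the TEN level itself; it encodes only the 10 dB shift criterion mentioned here, and the threshold values in the example are hypothetical):

```python
def likely_dead_region(quiet_threshold_db: float, ten_masked_db: float) -> bool:
    """Flag a likely cochlear dead region at one test frequency when
    the TEN-masked threshold is shifted at least 10 dB beyond the
    unmasked (quiet) threshold, per the simplified criterion above."""
    return (ten_masked_db - quiet_threshold_db) >= 10.0

# Hypothetical thresholds at one frequency:
print(likely_dead_region(65.0, 78.0))  # True  (13 dB shift)
print(likely_dead_region(65.0, 70.0))  # False (5 dB shift)
```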

Historically, there has been a lack of consensus among clinical investigators as to whether high frequency gain is beneficial for hearing aid users who have cochlear dead regions. Some studies suggest that high frequency gain could have deleterious effects on speech perception and should be limited for individuals with cochlear dead regions (Moore, 2001b; Turner, 1999; Padilha et al., 2007). For example, Vickers et al. (2001) and Baer et al. (2002) studied the benefit of high frequency amplification in quiet and noise for individuals with and without DRs. Both studies reported that individuals with DRs were unable to benefit from high frequency amplification. Similarly, Gordo & Iorio (2007) found that hearing aid users with DRs performed worse with high-frequency amplification than they did without it.

In contrast, Cox and her colleagues (2011) found beneficial effects of high frequency audibility whether or not the participants had dead regions. Others have reported equivalent performance for participants with and without dead regions for quiet and low noise conditions; however, in high noise conditions the individuals without dead regions demonstrated further improvement when additional high frequency amplification was provided, whereas participants with dead regions did not (Mackersie et al., 2004). This article is summarized in more detail here: http://blog.starkeypro.com/bid/66368/Recommendations-for-fitting-patients-with-cochlear-dead-regions

The current study was undertaken to examine the subjective and objective effect of high frequency amplification on matched pairs of participants (with and without DRs) in a variety of conditions. Participants were fitted with hearing aids that had two programs: the first (NAL) was based on the NAL-NL1 formula and the second (LP) was identical to the NAL-NL1 program below 1000Hz, with amplification rolled off above 1000Hz.  The goals of the study were to compare performance with these two programs, for individuals with and without dead regions. The following measures were conducted:

1) Speech discrimination in quiet laboratory conditions

2) Speech discrimination in noisy laboratory conditions

3) Subjective performance in everyday situations

4) Subjective preference for everyday situations

Participants were recruited from a pool of individuals who had previously been identified as typical hearing aid patients (Cox et al., 2011). Participants had bilateral flat or sloping sensorineural hearing loss with thresholds above 25 dB HL below 1 kHz and thresholds of 60 to 90 dB HL for at least part of the frequency range of 1-3 kHz.

The TEN test (Moore et al., 2004) was administered to determine the presence of DRs. To be eligible for the study, participants needed to have one or more DRs in the better ear at or above 1 kHz and no DRs below 1 kHz. Participants were then divided into two groups: the experimental group with DRs and the control group without DRs.  Individuals in the experimental group showed a diverse range of DR distribution across frequency. Almost half of the participants had DRs between 1-2 kHz, whereas the remainder had DRs only at or above 3 kHz. A little more than half of the participants had only one DR, whereas the others had more than one.

Individuals in the experimental group were matched in pairs with individuals from the control group. In total, there were 18 participant pairs; each matched for age, degree of hearing loss and slope of hearing loss. There were 24 men and 12 women. No attempt was made to match pairs based on gender.

Participants were fitted monaurally with behind-the-ear hearing aids coupled to vented skeleton earmolds. The monaural fitting was chosen to avoid complications when participants switched between the NAL and LP programs. Data collection was completed before the widespread availability of wireless hearing aids, so the participants would have had to reliably switch both hearing aids individually to the proper program every time to avoid making occasional subjective judgments based on mismatched programs.

The hearing aids had two programs: a program based on the NAL-NL1 prescription (NAL) and a program with high-frequency roll-off (LP). Participants were able to switch the programs themselves but could not identify the programs as NAL or LP. Half of the participants had NAL in P1 and LP in P2, whereas the other half had LP in P1 and NAL in P2.  Verification measures were conducted to ensure that the two programs matched below 1kHz and to make sure the participants judged the programs to be equally loud.

After a two week acclimatization period, participants returned for speech recognition testing and field trial training. Speech and noise stimuli were presented in a sound field, with the unaided ear plugged during testing. Speech recognition in quiet was evaluated using the Computer Assisted Speech Perception Assessment (CASPA; Mackersie et al., 2001).  The CASPA test includes lists of 10 consonant-vowel-consonant words spoken by a female talker. Five lists were presented for each of the NAL and LP programs. Stimuli were presented at 65 dB SPL.

Speech recognition in noise was evaluated with the Bamford-Kowal-Bench Speech in Noise test (BKB-SIN; Etymotic Research, 2005), which contains sentences spoken by a male talker, masked by 4-talker babble. The test contains lists of 10 sentences with 31 scoring words. In each list, the signal-to-noise ratio (SNR) decreases by 3 dB with each sentence, so that within any list the SNR ranges from +21 dB to -6 dB. Sentences were presented at 73 dB, a "loud but OK" level, as recommended for this test.
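The SNR progression within a BKB-SIN list can be verified with a one-line computation (illustrative only):

```python
# SNR for each of the 10 sentences in a BKB-SIN list: start at
# +21 dB and decrease 3 dB per sentence, ending at -6 dB.
snrs = [21 - 3 * i for i in range(10)]
print(snrs)  # [21, 18, 15, 12, 9, 6, 3, 0, -3, -6]
```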

Following the speech recognition testing, participants were trained in the field trial procedures for subjective ratings. They were asked to evaluate their ability to understand speech in everyday situations with the NAL and LP programs and identify occasions during which they felt they could understand some but not all of the words they were hearing. Participants were given booklets with daily rating sheets and listening checklists to record daily hours of hearing aid use and track the variety of their daily listening experiences.

After a two week field trial, participants returned to the laboratory for a second session of CASPA and BKB-SIN testing. They submitted ratings sheets and listening checklists and were interviewed about their preferred hearing aid program for everyday listening. The interview consisted of questions that covered program preferences related to:  understanding speech in quiet, understanding speech in noise, hearing over long distances, the sound of their own voice, sound quality, loudness, localization, the least tiring program and the one that provided the most comfortable sound. Participants were asked to indicate their preferred program for each of these criteria, as well as their preferred program for overall everyday use. They were asked to provide three reasons for overall preference.

Speech recognition testing in quiet revealed no difference in overall performance between the two groups, but there was a significant difference based on the hearing aid program that was used. Listeners from both the experimental group and the control group performed better with the broadband NAL program, though the difference between the NAL and LP programs was larger for the control group than the experimental group. This indicates that the individuals without DRs were able to derive more benefit from the additional high frequency information in the NAL program than the individuals with DRs did.

Speech recognition testing in noise revealed a similar finding but in this case the improvement with the NAL program was only significant for the control group. Although the experimental group’s mean scores with the NAL program were higher than those with the LP program, the difference did not reach statistical significance.  Because the BKB-SIN test used variable SNR levels, performance-intensity functions were constructed with scores obtained using the NAL and LP programs. These functions revealed that at any given SNR, speech was more intelligible with the NAL program. However, there was more of a difference between the NAL and LP functions for the control group than the experimental group, consistent with a program by group statistical interaction.

Subjective ratings of speech understanding revealed no significant difference between the experimental and control groups, but there was a significant difference based on program.  Participants from the control and experimental groups rated their performance better with the NAL program.

Interviews concerning program preference revealed that 23 participants preferred the NAL program and 11 preferred the LP program. There was no association with the presence of DRs. When the reasons supporting the participants’ preferences were analyzed, the most frequently mentioned reason for NAL preference was greater speech clarity. The most common reason for LP preference was that the other program (NAL) was too loud.

This investigation by Dr. Cox and her colleagues indicates that high-frequency amplification was beneficial to participants with single or multiple DRs, especially for speech recognition in quiet. In noise, participants with DRs still performed better with the NAL program, though the improvement was not as marked as it was for those without DRs. In field trials, participants with DRs reported more improvement with the NAL program than the control group did, indicating that perceived benefits in everyday situations exceeded what the laboratory results would have predicted. At no point in the study did high-frequency amplification reduce performance for individuals with or without DRs.

This finding is in contrast with previous reports (Vinay & Moore, 2007a; Gordo & Iorio, 2007). Cox and her colleagues note that most of the participants in their study had only one or two DRs as opposed to several contiguous DRs. They allow that their findings might not relate to the performance of participants with several contiguous DRs, but point out that among typical hearing aid candidates, it is unlikely for individuals to have more than one or two DRs. With this consideration, the authors suggest that high-frequency amplification should not be reduced, even in cases with identified dead regions.

This study from the University of Memphis provides a recommendation for use of prescribed settings and against reduction of high frequency gain for hearing aid users with one or two DRs.  The authors found beneficial effects of high frequency amplification in laboratory and everyday environments and noted no circumstances in which listeners demonstrated deleterious effects of high frequency amplification. These results may not pertain to individuals with several contiguous DRs, but they are pertinent to the majority of typical hearing aid wearers. The findings also support the use of subjective performance measures, as these provided additional information that was sometimes in contrast to the laboratory results. The authors point out that laboratory results do not always predict performance in everyday life, and it can be extrapolated that clinical measures of efficacy should always be supported with subjective reports of effectiveness, such as self-assessments of comfort and acceptance.

References

Baer, T., Moore, B.C. & Kluk, K. (2002). Effects of low-pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 pt. 1), 1133-1144.

Ching, T.Y., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R. M., Alexander, G.C., & Johnson, J.A. (2011). Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32, 339-348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

Etymotic Research (2005). BKB-SIN Speech in Noise Test, Version 1.03. Elk Grove Village, IL: Etymotic Research.

Gordo, A. & Iorio, M.C. (2007). Dead regions in the cochlea at high frequencies: Implications for the adaptation to hearing aids. Revista Brasileira de Otorrinolaringologia 73, 299-307.

Hogan, C.A. & Turner, C.W. (1998). High frequency audibility: benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Mackersie, C.L., Boothroyd, A. & Minniear, D. (2001). Evaluation of the Computer-Assisted Speech Perception Assessment Test (CASPA). Journal of the American Academy of Audiology 12, 390-396.

Mackersie, C.L., Crocker, T.L. & Davis, R.A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15, 498-507.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Moore, B.C., Huss, M. & Vickers, D.A. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C. (2001a). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C. (2001b). Dead regions in the cochlea: Implications for the choice of high-frequency amplification. In R.C. Seewald & J.S. Gravel (Eds). A Sound Foundation Through Early Amplification, p 153-166. Stafa, Switzerland: Phonak AG.

Moore, B.C., Glasberg, B.R. & Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Padilha, C., Garcia, M.V., & Costa, M.J. (2007).  Diagnosing cochlear dead regions and its importance in the auditory rehabilitation process. Brazilian Journal of Otolaryngology 73, 556-561.

Turner, C.W. (1999). The limits of high-frequency amplification. Hearing Journal 52, 10-14.

Turner, C.W. & Cummings, K.J. (1999). Speech audibility for listeners with high-frequency hearing loss. American Journal of Audiology 8, 47-56.

Vickers, D.A., Moore, B.C. & Baer, T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay, B.T. & Moore, B.C. (2007a). Prevalence of dead regions in subjects with sensorineural hearing loss. Ear and Hearing 28, 231-241.

Transitioning the Patient with Severe Hearing Loss to New Hearing Aids

Convery, E., & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors. 

Many individuals with severe-to-profound hearing loss are full-time, long-term hearing aid users. Because they rely heavily on their hearing aids for everyday communication, they are often reluctant to try new technology, and it is common to see patients with severe hearing loss keep a set of hearing aids longer than those with mild-to-moderate losses. These older hearing aids offered less effective feedback suppression and a narrower frequency range than those available today. As a result, many severely impaired hearing aid users were fitted with inadequate high-frequency gain and compensatory increases in low- to mid-frequency amplification. Having adapted to this frequency response, they may reject new hearing aids with increased high-frequency gain, stating that they sound too tinny or unnatural. Similarly, those who have adjusted to linear amplification may reject wide-dynamic-range compression (WDRC) as too soft, even though the strategy may provide benefits compared to their linear hearing aids.

Convery and Keidser evaluated a method to gradually transition experienced, severely impaired hearing aid users to new amplification characteristics. They measured subjective and objective outcomes as subjects took incremental steps toward a more appropriate frequency response. Twenty-three experienced, adult hearing aid users participated in the study. Participation was limited to subjects whose current gain and frequency response differed significantly from targets based on NAL-RP, a modification of the NAL formula for severe to profound hearing losses (Byrne et al., 1991). Most subjects’ own instruments had more gain at 250-2000 Hz and less gain at 6-8 kHz compared to NAL-RP targets, so the experimental transition involved adapting to less low- and mid-frequency gain and more high-frequency gain.

Subjects in the experimental group were fitted bilaterally with WDRC behind-the-ear hearing instruments. Directional microphones, noise reduction and automatic features were turned off, and volume controls were activated with an 8 dB range. The hearing aids had two programs: the first, called the “mimic” program, had a gain/frequency response adjusted to match the subject’s current hearing aids; the second was set to NAL-RP targets. MPO was the same for both programs. The programs were not manually accessible to the user; they were adjusted only by the experimenters at test sessions.

Four incremental programs were created for each participant in the experimental group. Each step was approximately a 25% progression from the mimic program frequency response to the NAL-RP prescribed response. At 3-week intervals, subjects were switched to the next incremental program, approaching NAL-RP settings as the experiment progressed. The programs in the control group’s hearing aids remained consistent for the duration of the study.
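The incremental steps described above amount to a linear interpolation, at each frequency, between the gain of the mimic program and the NAL-RP target. A minimal sketch of the idea (the gain values and function name below are hypothetical illustrations, not the study's actual fitting software):

```python
def interpolated_gains(mimic, target, fraction):
    """Blend two gain/frequency responses.

    mimic, target: dicts mapping frequency (Hz) to insertion gain (dB).
    fraction: 0.0 = pure mimic program, 1.0 = full NAL-RP target.
    """
    return {f: mimic[f] + fraction * (target[f] - mimic[f]) for f in mimic}

# Hypothetical example: mimic has more low/mid gain and less high-frequency
# gain than the NAL-RP prescription, as was typical for the study's subjects.
mimic = {250: 30, 500: 35, 1000: 40, 2000: 45, 4000: 35, 8000: 20}
nal_rp = {250: 22, 500: 28, 1000: 38, 2000: 48, 4000: 45, 8000: 32}

# Four ~25% steps toward the prescribed response, one per 3-week interval.
for step in (0.25, 0.50, 0.75, 1.00):
    print(step, interpolated_gains(mimic, nal_rp, step))
```

Each pass of the loop corresponds to one incremental program; at `fraction = 1.00` the output equals the NAL-RP target exactly.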

All subjects attended 8 sessions. At the initial session, subjects’ own instruments were measured in a 2cc coupler and RECD measurements were obtained with their own earmolds. The experimental hearing aids were fitted at the next session, and subjects returned for follow-up sessions at 1 week post-fitting and at 3-week intervals thereafter until 15 weeks post-fitting.

Subjects evaluated the mimic and NAL-RP programs in paired comparisons at 1 week and 15 weeks post-fitting. The task used live dialogues with female talkers in four everyday environments: café, office, reverberant stairwell and outdoors with traffic noise in the background. Hearing aid settings were switched from mimic to NAL-RP with a remote control, without audible program change beeps, so subjects were unaware of their current program. They were asked to indicate their preference for one program over the other on a 4-point scale: no difference, slightly better, moderately better or much better.

Speech discrimination was evaluated with the Beautifully Efficient Speech Test (BEST; Schmitt, 2004), which measured the aided SRT for sentence stimuli. Loudness scaling was then conducted to determine the most comfortable loudness level and range (MCL/R). Finally, subjects responded to a questionnaire concerning overall loudness comfort, speech intelligibility, sound quality, use of the volume control, use of their own hearing aids, and perceived changes in audibility and comfort. Speech discrimination, loudness scaling and questionnaire administration took place for all participants at 3-week intervals, starting at the 3-week post-fitting session.

One goal of the study was to determine if there would be a change in speech discrimination over time or a difference between the experimental and control groups. Analysis of BEST SRT scores yielded no significant difference between the experimental and control groups, nor was there a significant change in SRT over time. There was a significant interaction between these variables, indicating that the experimental group demonstrated slightly poorer SRT scores over time, whereas the control group’s SRTs improved slightly over time.

Subjects rated perceptual disturbance: how much the hearing aid settings in the current test period differed from those in the previous period, and how disturbing the difference was. There was no significant effect for the experimental or control groups, but there was a tendency for reports of perceptual disturbance to decrease over time for the control group and increase for the experimental group. The control group’s mimic program was consistent, so those subjects likely became acclimated over time, whereas the experimental group received incremental program changes at each session, so it is not surprising that they reported more perceptual disturbance. The trend was slight, however, indicating that even the experimental group experienced relatively little disturbance as their hearing aids approached NAL-RP targets.

Analysis of the paired comparison responses indicated a significant overall preference for the mimic program over the NAL-RP program. There was an interaction between environment and listening program, showing a strong preference for the mimic program in office and outdoor environments and somewhat less of a preference in the café and stairwell environments. When asked about their criteria for the comparisons, subjects most commonly cited speech clarity, loudness comfort and naturalness, regardless of whether mimic fit or NAL-RP was preferred.  There was no significant effect of time on program preference, but there was a slight increase in the control group’s preference for mimic at the end of the study, whereas the experimental group shifted slightly toward NAL-RP, away from mimic.

Over the course of the study, Convery and Keidser’s subjects demonstrated acceptance of new frequency responses with less low- to mid-frequency gain and more high frequency gain than their current hearing aids. No significant differences were noted between experimental and control groups for loudness, sound quality, voice quality, intelligibility or overall performance, nor did these variables change significantly over time. Though all subjects preferred the mimic program overall, there was a trend for the experimental group to shift slightly toward a preference for the NAL-RP settings, whereas the control group did not. This indicates that the experimental subjects had begun to acclimate to the new, more appropriate frequency response. Acclimatization might have continued to progress, had the study examined performance over a longer period of time. Prior research indicates that acclimatization to new hearing aids can progress over the course of several months and individuals with moderate and severe losses may require more time to adjust than individuals with milder losses (Keidser et al, 2008).

Reports of perceptual disturbance increased as incremental programs approached NAL-RP settings. This may not be surprising to clinicians, as hearing aid patients often require a period of acclimatization even after relatively minor changes to their hearing aid settings. Furthermore, clinical observation supports the suggestion that individuals with severe hearing loss may be even more sensitive to small changes in their frequency response. Allowing more than three weeks between program changes may result in less perceptual disturbance and easier transition to the new frequency response. Clinically, perceptual disturbance with a new frequency response can also be mitigated by counseling and encouraging patients that they will feel more comfortable with the new hearing aids as they progress through their trial periods.  It might also be helpful to extend the trial period (which is usually 30-45 days) for individuals with severe to profound hearing losses, to accommodate an extended acclimatization period.

Individuals with severe-to-profound hearing loss often hesitate to try new hearing aids.  Similarly, audiologists may be reluctant to recommend new instruments with WDRC or advanced features for fear that they will be summarily rejected. Convery and Keidser’s results support a process for transitioning experienced hearing aid users into new technology and suggest an alternative for clinicians who might otherwise hesitate to attempt departures from a patient’s current frequency response.

Because this was a double-blind study, the research audiologists were unable to counsel subjects as they would in a typical clinical situation.  The authors note that counseling during transition is of particular importance for severely impaired hearing aid users, to ensure realistic expectations and acceptance of the new technology. Though the initial fitting may approximate the client’s old frequency response, follow-up visits at regular intervals should slowly implement a more desirable frequency response.  Periodically, speech discrimination and subjective responses should be evaluated and the transition should be stopped or slowed if decreases in intelligibility or perceptual disturbances are noted.

In addition to changes in the frequency response, switching to new hearing aid technology usually means the availability of unfamiliar features such as directional microphones, noise reduction and many wireless features. Special features such as these can be introduced after the client acclimates to the new frequency response, or they can be relegated to alternate programs to be used on an experimental basis by the client. For instance, automatic directional microphones are sometimes not well-received by individuals who have years of experience with omnidirectional hearing aids. By offering directionality in an alternate program, the individual can test it out as needed and may be less likely to reject the feature or the hearing aids.  It is critical to discuss proper use of the programs and to set up realistic expectations.  Because variable factors such as frequency resolution and sensitivity to incremental amplification changes may affect performance and acceptance, the transition period should be tailored to the needs of the individual and monitored closely with regular follow-up appointments.

References

Baer, T., Moore, B.C.J. & Kluk, K. (2002).  Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 112, 1133-1144.

Barker, C., Dillon, H. & Newall, P. (2001). Fitting low ratio compression to people with severe and profound hearing losses. Ear and Hearing. 22, 130-141.

Byrne, D., Parkinson, A. & Newall, P. (1991).  Modified hearing aid selection procedures for severe/profound hearing losses. In: Studebaker, G.A. , Bess, F.H., Beck, L. eds. The Vanderbilt Hearing Aid Report II. Parkton, MD: York Press, 295-300.

Ching, T.Y.C., Dillon, H., Lockhart, F., vanWanrooy, E. & Carter, L. (2005). Are hearing thresholds enough for prescribing hearing aids? Poster presented at the 17th Annual American Academy of Audiology Convention and Exposition, Washington, DC.

Convery, E. & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

Flynn, M.C., Davis, P.B. & Pogash, R. (2004). Multiple-channel non-linear power hearing instruments for children with severe hearing impairment: long-term follow-up. International Journal of Audiology. 43, 479-485.

Keidser, G., Hartley, D. & Carter, L. (2008). Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey. American Journal of Audiology. 17, 136-146.

Keidser, G., O’Brien, A., Carter, L., McLelland, M., and Yeend, I. (2008) Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology. 47(10), 621-635.

Kuhnel, V., Margolf-Hackl, S. & Kiessling, J. (2001). Multi-microphone technology for severe to profound hearing loss. Scandinavian Audiology 30 (Suppl. 52), 65-68.

Moore, B.C.J. (2001). Dead regions in the cochlea: diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification. 5, 1-34.

Moore, B.C.J., Killen, T. & Munro, K.J. (2003). Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. International Journal of Audiology. 42, 465-474.

Schmitt, N. (2004). A New Speech Test (BEST Test). Practical Training Report. Sydney: National Acoustic Laboratories.

Vickers, D.A., Moore, B.C.J. & Baer, T. (2001). Effect of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 110, 1164-1175.

Addressing patient complaints when fine-tuning a hearing aid

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

As part of any clinically robust protocol, a hearing aid fitting will be objectively verified with real-ear measures and validated with a speech-in-noise test. Fine tuning and follow-up adjustments are an equally important part of the fitting process. This stage of the routine fitting process does not follow standardized procedures and is almost always directed by a patient’s complaints or descriptions of real-world experience with the hearing aids. This can be a challenging dynamic for the clinician. Patients may have difficulty putting their auditory experience into words and different people may describe similar sound quality issues in different ways.  Additionally, there may be several ways to address any given complaint and a given programming adjustment may not have the same effect on different hearing aids.

Hearing aid manufacturers often include a fine-tuning guide or automated fitting assistant within their software to help the clinician make appropriate adjustments for common patient complaints. There are limitations to the effectiveness of these fine tuning guides in that they are inherently specific to a limited range of products and the suggested adjustments are subject to the expertise and resources of that manufacturer.  The manner in which a sound quality complaint is described may differ between manufacturers and the recommended adjustments in response to the complaint may differ as well.

There have been a number of efforts to develop a single hearing aid troubleshooting guide that could be used across devices and manufacturers (Moore et al., 1998; Gabrielsson et al., 1979, 1988, 1990; Lundberg et al., 1992; Ovegard et al., 1997). The first and perhaps most challenging step toward this goal has been to determine the most common descriptors that patients use for sound quality complaints. Moore and his colleagues (1998) developed a procedure in which responses on three rating scales (e.g., “loud versus quiet”, “tinny versus boomy”) were used to make adjustments to gain and compression settings. However, their procedure did not allow for the bevy of descriptors that patients create, limiting its potential utility in everyday clinical settings. Gabrielsson and colleagues, in a series of Swedish studies, developed a set of reliable terms to describe sound quality. These descriptors have since been translated and used in English-language research (Bentler et al., 1993).

As hearing instruments become more complicated with numerous adjustable parameters, and given the wide range of experience and expertise of individuals fitting hearing instruments today, an independent fine tuning guide is an appealing concept. Lorienne Jenstad and her colleagues proposed an “expert system” for troubleshooting hearing aid complaints.  The authors explained that expert systems “emulate the decision making abilities of human experts” (Tharpe et al., 1993).  To develop the system, two primary questions were asked:

1) What terms do hearing impaired listeners use to describe their reactions to specific hearing aid fitting problems?

2) What is the expert consensus on how these patient complaints can be addressed by hearing aid adjustment?

There were two phases to the project. To address question one, the authors surveyed clinicians for their reports on how patients describe sound quality with regard to specific fitting problems. To address question two, the most frequently reported descriptors from the clinicians’ responses were submitted to a panel of experts to determine how they would address the complaints.

The authors sent surveys to 1934 American Academy of Audiology members and received 311 qualifying responses. The surveys listed 18 open-ended questions designed to elicit descriptive terms that patients would likely use for hearing aid fitting problems. For example, the question “If the fitting has too much low-frequency gain…” yielded responses such as “hollow”, “plugged” and “echo”.  The questions probed common problems related to gain, maximum output, compression, physical fit, distortion and feedback.  The survey responses yielded a list of the 40 most frequent descriptors of hearing aid fitting problems, ranked according to the number of occurrences.

The list of descriptors was used to develop a questionnaire to probe potential solutions for each problem.  Each descriptor was put in the context of, “How would you change the fitting if your patient reports that ___?”, and 23 possible fitting solutions were listed.  These questionnaires were completed by a panel of experts with a minimum of five years of clinical experience. Respondents could offer more than one solution to a problem and the solutions were weighted based on the order in which they were offered. There was strong agreement among experts, suggesting that their responses could be used reliably to provide troubleshooting solutions based on sound quality descriptions. The expert responses also agreed with the initial survey that was sent to the group of 1934 audiologists, supporting the validity of these response sets.
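The order-based weighting described above can be sketched in a few lines. The weights and responses here are hypothetical illustrations (the paper does not publish its exact weighting function); the point is only that a solution an expert offers first contributes more to its aggregate rank than one offered later:

```python
from collections import defaultdict

def rank_solutions(expert_responses):
    """Aggregate ordered solution lists from multiple experts.

    expert_responses: list of lists; each inner list is one expert's
    solutions for a single complaint, in the order offered.
    Weighting (assumed for illustration): first solution = 3 points,
    second = 2, third = 1. Returns solutions by total weight, highest first.
    """
    scores = defaultdict(int)
    for response in expert_responses:
        for position, solution in enumerate(response[:3]):
            scores[solution] += 3 - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical expert responses to the complaint "My ear feels plugged":
responses = [
    ["increase vent", "decrease low frequency gain"],
    ["increase vent", "decrease low frequency gain"],
    ["decrease low frequency gain", "increase vent"],
]
print(rank_solutions(responses))
```

With these responses, "increase vent" outscores "decrease low frequency gain" (8 points to 7) and therefore ranks first, mirroring the kind of ordering that appears in the published tables.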

The expert responses resulted in a fine-tuning guide in the form of tables or simplified flow charts. The charts list individual descriptors, with potential solutions listed below in the order in which they should be attempted. For example, below the descriptor “My ear feels plugged”, the first solution is to “increase vent” and the second is to “decrease low frequency gain”. The idea is that the clinician would first try to increase the vent diameter and, if that did not solve the problem, move on to the second option, decreasing low-frequency gain. If an attempted solution creates another sound quality problem, the table can be used to address that problem in the same way.
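In software terms, such a guide is simply an ordered lookup table: each descriptor maps to a list of solutions to try in sequence. A minimal sketch, populated only with the single example quoted above (the full table, with all 40 descriptors, is in the original paper; the function name is a hypothetical illustration):

```python
# Ordered troubleshooting table: descriptor -> solutions to try, in order.
# Only the example entry quoted from the paper is included here.
FINE_TUNING_GUIDE = {
    "My ear feels plugged": [
        "increase vent",
        "decrease low frequency gain",
    ],
}

def next_solution(descriptor, attempts_made):
    """Return the next solution to try, or None when the list is exhausted.

    attempts_made: number of solutions already tried for this complaint.
    """
    solutions = FINE_TUNING_GUIDE.get(descriptor, [])
    if attempts_made < len(solutions):
        return solutions[attempts_made]
    return None

print(next_solution("My ear feels plugged", 0))  # → increase vent
```

If a solution introduces a new complaint, the clinician would look up the new descriptor and start again at position zero, exactly as the table-based procedure describes.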

The authors correctly point out that there are limitations to this tool and that proposed solutions will not necessarily have the same results with all hearing aids. For instance, depending on the compressor characteristics, raising a kneepoint might increase or decrease the gain at input levels below the kneepoint. It is up to the clinician to be familiar with a given hearing aid and its adjustable parameters to arrive at the appropriate course of action.

Beyond manipulation of the hearing aid itself, the optimal solution for a particular patient complaint might not be the first recommendation in any tuning guide. For instance, for the fitting problem labeled “Hearing aid is whistling”, the fourth solution listed in the table is “check for cerumen”.  This solution appeared fourth in the ranking based on the frequency of responses from the experts on the panel. However, any competent clinician who encounters a patient with hearing aid feedback should check for cerumen first before considering programming modifications.

The expert system proposed by Jenstad and her colleagues represents a thoroughly examined, reliable step toward development of a universal troubleshooting guide for clinicians. Their paper was published in 2003, so some items should be updated to suit modern hearing aids. For example, current feedback management strategies result in fewer and less challenging feedback problems.  Solutions for feedback complaints might now include, “calibrate feedback management system” versus gain or vent adjustments. Similarly, most hearing aids now have solutions for listening in noise that extend beyond the simple inclusion of directional microphones, so “directional microphone” might not be an appropriately descriptive solution to address complaints about hearing in noise, as the patient is probably already using a directional microphone.

Overall, the expert system proposed by Jenstad and colleagues is a helpful clinical tool; especially if positioned as a guide to help patients find the appropriate terms to describe their perceptions. However, as the authors point out, it is not meant to replace prescriptive methods, measures of verification and validation, or the expertise of the audiologist. The responsibility is with the clinician to be informed about current technology and its implications for real world hearing aid performance and to communicate with their patients in enough detail to understand their patients’ comments and address them appropriately.

References

Bentler, R.A., Nieburh, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness II: subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Gabrielsson, A. (1979). Dimension analyses of perceived sound quality of sound-reproducing systems. Scandinavian Journal of Psychology 20, 159-169.

Gabrielsson, A., Hagerman, B., Bech-Kristensen, T. & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America 88, 1359-1366.

Gabrielsson, A., Schenkman, B.N. & Hagerman, B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. Journal of Speech and Hearing Research 31, 166-177.

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

Lundberg, G., Ovegard, A., Hagerman, B., Gabrielsson, A. & Brandstrom, U. (1992). Perceived sound quality in a hearing aid with vented and closed earmold equalized in frequency response. Scandinavian Audiology 21, 87-92.

Moore, B.C.J., Alcantara, J.I. & Glasberg, B.R. (1998). Development and evaluation of a procedure for fitting multi-channel compression hearing aids. British Journal of Audiology 32, 177-195.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A., Bengtsson, M. & Brandstrom, U. (1997). Sound quality judgments during acclimatization of hearing aids. Scandinavian Audiology 26, 43-51.

Schweitzer, C., Mortz, M. & Vaughan, N. (1999). Perhaps not by prescription – but by perception. High Performance Hearing Solutions 3, 58-62.

Tharpe, A.M., Biswas, G. & Hall, J.W. (1993). Development of an expert system for pediatric auditory brainstem response interpretation. Journal of the American Academy of Audiology 4, 163-171.

Recommendations for fitting patients with cochlear dead regions

Cochlear Dead Regions in Typical Hearing Aid Candidates:

Prevalence and Implications for Use of High-Frequency Speech Cues

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Audibility is a well-known predictor of speech recognition ability (Humes, 2007) and audibility of high-frequency information is of particular importance for consonant identification.  Therefore, audibility of high-frequency speech cues is appropriately regarded as an important element of successful hearing aid fittings (Killion & Tillman, 1982; Skinner & Miller, 1983). In contrast to this expectation, some studies have reported that high-frequency gain might have limited or even negative impact on speech recognition abilities of some individuals (Murray & Byrne, 1986; Ching et al., 1998; Hogan & Turner, 1998). These researchers observed that when high-frequency hearing loss exceeded 55-60dB, some listeners were unable to benefit from increased high-frequency audibility.  A potential explanation for this variability was provided by Brian Moore (2001), who suggested that an inability to benefit from amplification in a particular frequency region could be due to cochlear “dead regions” or regions where there is a loss of inner hair cell functioning.

Moore suggested that hearing aid fittings could potentially be improved if clinicians were able to identify patients with cochlear dead regions (DRs). Working under the assumption that a diagnosis of DRs might contraindicate high-frequency amplification, he and his colleagues developed the TEN test as a method of determining the presence of cochlear dead regions (Moore et al., 2000, 2004). The advent of the TEN test provided a standardized measurement protocol for DRs, but there is still wide variability in the reported prevalence of DRs. Estimates range from as low as 29% (Preminger et al., 2005) to as high as 84% (Hornsby & Dundas, 2009), with other studies reporting DR prevalence somewhere in the middle of that range. Several factors are likely to contribute to this variability, including degree of hearing loss, audiometric configuration and test technique.

In addition to the variability in reported prevalence of DRs, there is also variability in reports of how DRs affect the ability to benefit from high-frequency speech cues (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004). It remains unclear whether high-frequency amplification recommendations should be modified to reflect the presence of DRs. Most research agrees that as hearing thresholds increase, the likelihood of DRs also increases; hearing aid users with severe to profound hearing losses are likely to have at least one DR. Because a large proportion of hearing aid users have moderate to severe hearing losses, Dr. Cox and her colleagues wanted to determine the prevalence of DRs in this population. In addition, they examined the effect of DRs on the use of high-frequency speech cues by individuals with moderate to severe loss.

Their study addressed two primary questions:

1) What is the prevalence of dead regions (DRs) among listeners with hearing thresholds in the 60-90dB range?

2) For individuals with hearing loss in the 60-90dB range, do those with DRs differ from those without DRs in their ability to use high-frequency speech cues?

One hundred and seventy adults with bilateral, flat or sloping sensorineural hearing loss were tested. All subjects had thresholds of 60 to 90dB in the better ear for at least part of the range from 1-3kHz and thresholds no better than 25dB for frequencies below 1kHz. Subjects ranged in age from 38 to 96 years, and 59% of the subjects had experience with hearing aids.

First, subjects were evaluated for the presence of DRs with the TEN test. Then, speech recognition was measured using high-frequency emphasis (HFE) and high-frequency emphasis, low-pass filtered (HFE-LP) stimuli from the QSIN test (Killion et al., 2004). HFE items on this test are amplified up to 32dB above 2.5kHz, whereas the HFE-LP items have much less gain in this range. Comparing subjects’ responses to these two types of stimuli allowed the investigators to assess changes in speech intelligibility with additional high-frequency cues. Presentation levels for the QSIN were chosen by using a loudness scale and bracketing procedure to arrive at a level that the subject considered “loud but okay”. Finally, audibility differences for the two QSIN conditions were estimated using the Speech Intelligibility Index (ANSI S3.5-1997).

The TEN test results revealed that 31% of the participants had DRs at one or more test frequencies. Of the 307 ears tested, 23% were found to have a DR at one or more frequencies. Among those who tested positive for DRs, about one third had DRs in both ears; the remaining two thirds had a DR in one ear only, with left and right ears affected in equal proportion. Mean audiometric thresholds were essentially identical for the two groups below 1kHz, but above 1kHz thresholds were significantly poorer for the group with DRs than for the group without. DRs were most prevalent at frequencies above 1.5kHz. There were no age or gender differences.

On the QSIN test, the mean HFE-LP scores were significantly poorer than the mean HFE scores for both groups. There was also a significant difference in performance based on whether or not the participants had DRs. Perhaps more interestingly, there was a significant interaction between DR group and test condition: the additional high-frequency information in the HFE stimuli produced slightly greater performance gains for the group without DRs than for the group with DRs. Furthermore, subjects with one or more isolated DRs were better able to benefit from the high-frequency cues in the HFE lists than were subjects with multiple, contiguous DRs. Although a few isolated individuals demonstrated lower scores for the HFE stimuli, the differences were not significant and could have been explained by measurement error. Therefore, the authors conclude that the additional high-frequency information in the HFE stimuli was not likely to have had a detrimental effect on performance for these individuals.

As had also been reported in previous studies, the group with DRs had poorer mean audiometric thresholds than the group without DRs, so it was possible that audibility played a role in QSIN performance. Analysis of the audibility of QSIN stimuli for the two groups revealed that high-frequency cues in the HFE lists were indeed more audible for the group without DRs. Even after accounting for this audibility effect, however, the presence of DRs still had a small but significant effect on performance.

The results of this study suggest that listeners with cochlear DRs still benefit from high frequency speech cues, albeit slightly less than those without dead regions.  The performance improvements were small and the authors caution that it is premature to draw firm conclusions about the clinical implications of this study.  Despite the need for further examination, the results of the current study certainly do not support any reduction in prescribed gain for hearing aid candidates with moderate to severe hearing losses.  The authors acknowledge, however, that because the findings of this and other studies are based on group data, it is possible that specific individuals may be negatively affected by amplification within dead regions. Based on the research to date, this seems more likely to occur in individuals with profound hearing loss who may have multiple, contiguous DRs.

More study is needed to determine the most effective clinical approach to managing cochlear dead regions in hearing aid candidates. Future research should be done with hearing aid users, examining, for example, the effects of noise on everyday hearing aid performance for individuals with DRs. A study by Mackersie et al. (2004) showed that subjects with DRs suffered more negative effects of noise than did subjects without DRs. If there is a convergence of evidence to this effect, then recommendations about the use of high-frequency gain, directionality and noise reduction could be determined as they relate to DRs. For now, Dr. Cox and her colleagues recommend that until there are clear criteria to identify individuals for whom high-frequency gain could have deleterious effects, clinicians should continue using best-practice protocols and provide high-frequency gain according to current prescriptive methods.

References

ANSI (1997). American National Standard Methods for Calculation of the Speech Intelligibility Index (ANSI S3.5-1997). New York: American National Standards Institute.

Ching,T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

Hogan, C.A. & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Humes, L.E. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Killion, M. C. & Tillman, T.W. (1982). Evaluation of high-fidelity hearing aids. Journal of Speech and Hearing Research 25, 15-25.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C.J., Huss, M., Vickers, D.A., et al. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C.J., Glasberg, B.R. & Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Murray, N. & Byrne, D. (1986). Performance of hearing-impaired and normal hearing listeners with various high-frequency cut-offs in hearing aids. Australian Journal of Audiology 8, 21-28.

Skinner, M.W. & Miller, J.D. (1983). Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss.  Audiology 22, 253-279.

A preferred speech stimulus for testing hearing aids

Development and Analysis of an International Speech Test Signal (ISTS)

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not represent the opinions of the original authors.

Current hearing aid functional verification measures are described in the standards IEC 60118 and ANSI S3.22 and use stationary signals, including sine wave frequency sweeps and unmodulated noise signals. Test stimuli are presented to the hearing instrument and frequency-specific gain and output are measured in a coupler or ear simulator. Current standardized measurement methods require the instrument to be set to maximum or a reference test setting, with adaptive parameters such as noise reduction and feedback management turned off.

These procedures provide helpful information for quality assurance and determining fitting ranges for specific hearing aid models. However, because they were designed for linear, time-invariant hearing instruments, they have limitations for today’s nonlinear, adaptive instruments and cannot provide meaningful information about real-life performance in the presence of dynamically changing acoustic environments.

Speech is the most important stimulus encountered by hearing aid users and nonlinear hearing aids with adaptive characteristics process speech differently than they do stationary signals like sine waves and unmodulated noise. Therefore, it seems preferable for standardized test procedures to use stimuli that are as close as possible to natural speech.  Indeed, there are some hearing aid test protocols that use samples of natural speech or live speech. But natural speech stimuli will have different spectra, fundamental frequencies, and temporal characteristics depending on the speaker, the source material and the language. For hearing aid verification measures to be comparable to each other it is necessary to have standardized stimuli that can be used internationally.

Alternative test stimuli have been proposed based on the long-term average speech spectrum (Byrne et al., 1994) or temporal envelope fluctuations (Fastl, 1987). The International Collegium for Rehabilitative Audiology (ICRA) developed a set of stimuli (Dreschler, 2001) that reflect the long-term average speech spectrum and have speech-like modulations that differ across frequency bands.  ICRA stimuli have advantages over modulated noise and sine wave stimuli in that they share some similar characteristics with speech, but they lack speech-like comodulation characteristics (e.g., fundamental frequency). Furthermore, ICRA stimuli are often classified by signal processing algorithms as “noise” rather than “speech”, so they are less than optimal for measuring how hearing aids process speech.

The European Hearing Instrument Manufacturers Association (EHIMA) is developing a new measurement procedure for nonlinear, adaptive hearing instruments and an important part of their initiative is development of a standardized test signal or International Speech Test Signal (ISTS).  The development and analysis of the ISTS was described in a paper by Holube, et al. (2010).

There were fifteen articulated requirements for the ISTS, based on available test signals and knowledge of natural speech, the most clinically salient of which are:

  • The ISTS should resemble normal speech but should be non-intelligible.
  • The ISTS should be based on six major languages, representing a wide range of phonological structures and fundamental frequency variations.
  • The ISTS should be based on female speech and should deviate from the international long-term average speech spectrum (ILTASS) for females by no more than 1dB.
  • The ISTS should have a bandwidth of 100 to 16,000Hz and an overall RMS level of 65dB.
  • The dynamic range should be speech-like and comparable to published values for speech (Cox et al., 1988; Byrne et al., 1994).
  • The ISTS should contain voiced and voiceless components. Voiced components should have a fundamental frequency characteristic of female speech.
  • The ISTS should have short-term spectral variations similar to speech (e.g., formant transitions).
  • The ISTS should have modulation characteristics similar to speech (Plomp, 1984).
  • The ISTS should contain short pauses similar to natural running speech.
  • The ISTS stimulus should have a 60-second duration, from which other durations can be derived.
  • The stimulus should allow for accurate and reproducible measurements regardless of signal duration.
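Several of these requirements lend themselves to simple automated checks on a candidate signal. The sketch below illustrates two of them, the overall RMS level and the maximum 1dB deviation from the reference LTASS; the full-scale calibration offset is an arbitrary assumption for illustration, not part of any standard:

```python
import math

# Sketch: checking two ISTS-style requirements on a candidate signal --
# overall RMS level, and per-band deviation from a reference LTASS.
# The 100dB full-scale calibration below is an assumed illustration.

def rms_level_db(samples, full_scale_db=100.0):
    """Overall RMS level in dB, given an assumed full-scale calibration."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return full_scale_db + 20.0 * math.log10(rms)

def within_ltass_tolerance(band_levels_db, reference_db, tol_db=1.0):
    """True if every band deviates from the reference LTASS by <= tol_db."""
    return all(abs(b - r) <= tol_db
               for b, r in zip(band_levels_db, reference_db))
```

With a real recording, `samples` would be the calibrated waveform and `band_levels_db` the measured third-octave LTASS; here both functions simply express the pass/fail criteria stated in the requirements.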

Twenty-one female speakers of six different languages (American English, Arabic, Mandarin, French, German and Spanish) were recorded while reading a story, the text and translations of which came from the Handbook of the International Phonetic Association (IPA). One recording from each language was selected based on a number of criteria including voice quality, naturalness and median fundamental frequency. The recordings were filtered to meet the ILTASS characteristics described by Byrne et al. (1994) and were then split into 500ms segments that roughly corresponded to individual syllables. These syllable-length segments were concatenated in pseudo-random order to generate sections of 10 or 15 seconds, which could in turn be combined to produce different durations of the ISTS stimulus; no single language was used more than once in any six-segment section. Speech interval and pause durations were analyzed to ensure that ISTS characteristics would closely resemble natural speech patterns.
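The splicing step described above can be sketched in a few lines. This is a simplified reading, not the authors' implementation: segments are represented as (language, index) tags standing in for 500ms audio chunks, and the no-repeat constraint is enforced by giving each six-segment stretch every language exactly once:

```python
import random

# Sketch of the ISTS splicing step: syllable-like segments from six
# languages are drawn in pseudo-random order, under the constraint that
# no language appears more than once in any six-segment section.
# Segments are (language, index) tags standing in for audio chunks.

LANGUAGES = ["English", "Arabic", "Mandarin", "French", "German", "Spanish"]

def build_sequence(segments_by_language, n_sections, rng):
    """Concatenate six-segment sections, each using every language once."""
    order = []
    for _ in range(n_sections):
        langs = LANGUAGES[:]
        rng.shuffle(langs)                       # pseudo-random language order
        for lang in langs:
            order.append(rng.choice(segments_by_language[lang]))
    return order

rng = random.Random(0)  # fixed seed for reproducibility
pool = {lang: [(lang, i) for i in range(20)] for lang in LANGUAGES}
sequence = build_sequence(pool, n_sections=4, rng=rng)

# Each consecutive six-segment section covers all six languages exactly once.
for k in range(0, len(sequence), 6):
    assert sorted(lang for lang, _ in sequence[k:k + 6]) == sorted(LANGUAGES)
```

In the real procedure the concatenated segments form 10- and 15-second audio sections; here the sequence of tags simply demonstrates the constraint.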

For analysis purposes, a 60-second ISTS stimulus was created by concatenation of 10- and 15-second sections.  This ISTS stimulus was measured and compared to natural speech and ICRA-5 stimuli based on several criteria:

  • Long-term average speech spectrum (LTASS)
  • Short term spectrum
  • Fundamental frequency
  • Proportion of voiceless segments
  • Band-specific modulation spectra
  • Comodulation characteristics
  • Pause and speech duration
  • Dynamic range (spectral power level distribution)

On all of the analysis criteria, the ISTS stimulus resembled natural speech stimuli as well or better than ICRA-5 stimuli. Notable improvements for the ISTS over the ICRA-5 stimulus were its comodulation characteristics and dynamic range of 20-30dB, as well as pauses and combinations of voiced and voiceless segments that more closely resembled the distributions in natural speech.  Overall, the ISTS was deemed an appropriate speech-like stimulus proposal for the new standard measurement protocol.

Following the detailed analysis, the ISTS stimulus was used to measure four different hearing instruments, each programmed to fit a flat, sensorineural hearing loss of 60dB HL. Each instrument was nonlinear, with adaptive noise reduction, compression and feedback management. The first-fit algorithms from each manufacturer were used, with all microphones fixed in omnidirectional mode. Instead of yielding gain and output measurements across frequency for a single input level, the results showed percentile-dependent gain (99th, 65th and 30th) across frequency, referenced to the long-term average speech spectrum. The percentile-dependent gain values provided information about nonlinearity: the softer components of speech were represented by the 30th percentile, while moderate and loud components were represented by the 65th and 99th percentiles, respectively. The relations among these three percentiles revealed the differences in gain for soft, moderate and loud sounds.
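In percentile terms, this analysis amounts to comparing the distributions of short-term levels at the hearing aid input and output within each frequency band. A rough single-band sketch under that assumption (band filtering and calibration details omitted), using a hypothetical compressive gain rule rather than any measured instrument:

```python
import random

# Sketch: percentile-dependent gain in one frequency band. Short-term
# levels (in dB) are collected at the hearing aid input and output; the
# gain for soft, moderate and loud speech components is the difference
# between matching percentiles (30th, 65th, 99th) of the two distributions.

def percentile(values, p):
    """Linearly interpolated p-th percentile of a list of values."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def percentile_gains(input_db, output_db, percentiles=(30, 65, 99)):
    """Gain in dB at each requested percentile of the level distribution."""
    return [percentile(output_db, p) - percentile(input_db, p)
            for p in percentiles]

# Hypothetical compressive aid: 2:1 compression around 55dB plus 20dB gain,
# so soft components receive more gain than loud ones.
rng = random.Random(0)
inp = [rng.gauss(55.0, 8.0) for _ in range(1000)]       # short-term input levels
out = [0.5 * (x - 55.0) + 55.0 + 20.0 for x in inp]     # compressed output levels
g30, g65, g99 = percentile_gains(inp, out)
assert g30 > g65 > g99   # nonlinearity shows up as percentile-dependent gain
```

The decreasing gain from the 30th to the 99th percentile is exactly the signature of compression that the percentile display makes visible.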

The measurement technique described by Holube and colleagues, using the ISTS stimulus, offers significant advantages over current measurement protocols with standard sine wave or noise stimuli. First and perhaps most importantly, it allows hearing instruments to be programmed to real-life settings with adaptive signal processing features active. It measures how hearing aids process a stimulus that very closely resembles natural speech, so clinical verification measures may provide more meaningful information about everyday performance. By showing changes in percentile gain values across frequency, it also makes compression effects directly visible and may be used to evaluate noise reduction algorithms as well. The authors also note that the acoustic resemblance of the ISTS to speech, combined with its lack of linguistic information, may have additional applications for diagnostic testing, telecommunications or communication acoustics.

The ISTS is currently available in some probe microphone equipment and will likely be introduced in most commercially available equipment over the next few years. Its introduction brings a standardized speech stimulus for hearing aid testing to the clinic. An important component of clinical best practice is the measurement of a hearing aid’s response characteristics, which is most easily accomplished through in-situ probe microphone measurement in combination with a speech test stimulus such as the ISTS.

References

American National Standards Institute (ANSI). ANSI S3.22-2003. Specification of hearing aid characteristics. New York: Acoustical Society of America.

Byrne, D., Dillon, H., Tran, K., Arlinger, S. & Wilbraham, K. (1994). An international comparison of long-term average speech spectra. Journal of the Acoustical Society of America, 96(4), 2108-2120.

Cox, R.M., Matesich, J.S. & Moore, J.N. (1988). Distribution of short-term rms levels in conversational speech. Journal of the Acoustical Society of America, 84(3), 1100-1104.

Dreschler, W.A., Verschuure, H., Ludvigsen, C. & Westerman, S. (2001). ICRA noises: Artificial noise signals with speech-like spectral and temporal properties for hearing aid assessment. Audiology, 40, 148-157.

Fastl, H. (1987). Ein Störgeräusch für die Sprachaudiometrie [A noise signal for speech audiometry]. Audiologische Akustik, 26, 2-13.

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

International Electrotechnical Commission, 1994, IEC 60118-0. Hearing Aids: Measurement of electroacoustical characteristics, Bureau of the International Electrotechnical Commission, Geneva, Switzerland.

IPA, 1999. Handbook of the International Phonetic Association. Cambridge University Press.

Plomp, R. (1984). Perception of speech as a modulated signal. In M.P.R. van den Broecke & A. Cohen (eds), Proceedings of the 10th International Congress of Phonetic Sciences, Utrecht. Dordrecht: Foris Publications, 29-40.