Starkey Research & Clinical Blog

Differences Between Directional Benefit in the Lab and Real-World

Relationship Between Laboratory Measures of Directional Advantage and Everyday Success with Directional Microphone Hearing Aids

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

People with hearing loss require a better signal-to-noise ratio (SNR) than individuals with normal hearing (Dubno et al, 1984; Gelfand et al, 1988; Bronkhorst and Plomp, 1990). Among many technological improvements, a directional microphone is arguably the only effective hearing aid feature for improving SNR and, subsequently, speech understanding in noise. A wide range of studies support the benefit of directionality for speech perception in competing noise (Agnew & Block, 1997; Nilsson et al, 1994; Ricketts and Henry, 2002; Valente et al, 1995). Directional benefit is defined as the difference in speech recognition ability between omnidirectional and directional microphone modes. In laboratory conditions, directional benefit averages around 7-8dB but varies considerably, ranging from 2-3dB up to 14-16dB (Valente et al, 1995; Agnew & Block, 1997).

An individual’s perception of directional benefit varies considerably among hearing aid users. Cord et al (2002) interviewed individuals who wore hearing aids with switchable directional microphones; 23% reported that they did not use the directional feature. Many respondents said they had initially tried the directional mode but did not notice adequate improvement in their ability to understand speech and therefore stopped using it. This discrepancy between measured and perceived benefit has prompted exploration of the variables that affect performance with directional hearing aids. Under laboratory conditions, Ricketts and Mueller (2000) examined the effect of audiometric configuration, degree of high frequency hearing loss and aided omnidirectional performance on directional benefit, but found no significant interactions among any of these variables.

The current study by Cord and her colleagues examined the relationship between measured directional advantage in the laboratory and success with directional microphones in everyday life. The authors studied a number of demographic and audiological variables, including audiometric configuration, unaided SRT, hours of daily hearing aid use and length of experience with current hearing aids, in an effort to determine their value for predicting everyday success with directional microphones.

Twenty hearing-impaired individuals were selected to participate in one of two subject groups. The “successful” group consisted of individuals who reported regular use of both omnidirectional and directional microphone modes. The “unsuccessful” group reported never using the directional mode and using the omnidirectional mode all the time. Analysis of audiological and demographic information showed that the only significant differences in audiometric threshold between the successful and unsuccessful groups were at 6-8 kHz; otherwise, the two groups had very similar audiometric configurations, on average. There were no significant differences between the two groups for age, unaided SRT, unaided word recognition scores, hours of daily use or length of experience with hearing aids.

Subjects were fitted with a variety of styles – some BTE and some custom – but all had manually accessible omnidirectional and directional settings. The Hearing in Noise Test (HINT; Nilsson et al, 1994) was administered to subjects with their hearing aids in directional and omnidirectional modes. Sentence stimuli were presented in front of the subject and correlated competing noise was presented through three speakers: directly behind the subject and on each side. Following the HINT, participants completed the Listening Situations Survey (LSS), a questionnaire developed specifically for this study. The LSS was designed to assess how likely participants were to encounter disruptive background noise in everyday situations, to determine whether unsuccessful and successful directional microphone users were equally likely to encounter noisy situations in everyday life. The survey consisted of four questions:

1) On average, how often are you in listening situations in which bothersome background noise is present?

2) How often are you in social situations in which at least 3 other people are present?

3) How often are you in meetings (e.g. community, religious, work, classroom, etc.)?

4) How often are you talking with someone in a restaurant or dining hall setting?

The HINT results showed an average directional benefit of 3.2dB for successful users and 2.1dB for unsuccessful users. Although directional benefit was slightly greater for the successful users, the difference between the groups was not statistically significant. There was a broad range of directional benefit for both groups: from -0.8 to 6.0dB for successful users and from -3.4 to 10.5dB for the unsuccessful users. Interestingly, three of the ten successful users obtained little or no directional benefit, whereas seven of the ten unsuccessful users obtained positive directional benefit.
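Because the HINT yields a reception threshold for speech (RTS) in dB SNR, where a lower value is better, directional benefit reduces to a simple subtraction. A minimal sketch, using hypothetical RTS values rather than data from the study:

```python
# Directional benefit = omnidirectional RTS minus directional RTS.
# Lower RTS is better, so a positive difference means the directional
# mode supported speech understanding at a poorer (lower) SNR.

def directional_benefit(omni_rts_db, directional_rts_db):
    """Return directional benefit in dB from two HINT thresholds (dB SNR)."""
    return omni_rts_db - directional_rts_db

# Hypothetical subject: omni RTS of 2.5 dB SNR, directional RTS of -0.7 dB SNR
print(round(directional_benefit(2.5, -0.7), 1))  # prints 3.2
```

A negative result, as some subjects in both groups showed, means the listener actually performed worse in the directional mode.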

Analysis of the LSS results showed that successful users of directional microphones were somewhat more likely than unsuccessful users to encounter listening situations with bothersome background noise and to encounter social situations with more than three other people present. However, statistical analysis showed no significant differences between the two groups for any items on the LSS survey, indicating that users who perceived directional benefit and used their directional microphones were not significantly more likely to encounter noisy situations in everyday life.

These observations led the authors to conclude that directional benefit as measured in the laboratory did not predict success with directional microphones in everyday life. Some participants with positive directional advantage scores were unsuccessful directional microphone users and conversely, some successful users showed little or no directional advantage. There are a number of potential explanations for their findings. First, despite the LSS results, it is possible that unsuccessful users did not encounter real-life listening situations in which directional microphones would be likely to help. Directional microphone benefit is dependent on specific characteristics of the listening environment (Cord et al, 2002; Surr et al, 2002; Walden et al, 2004), and is most likely to help when the speech source is in front of and relatively close to the listener, with spatial separation between the speech and noise sources. Individuals who rarely encounter this specific listening situation would have limited opportunity to evaluate directional microphones and may therefore perceive only limited benefit from them.

Unsuccessful directional microphone users may also have had unrealistically high expectations about directional benefits. Directionality can be a subtle but effective way of improving speech understanding in noise. Reduction of sound from the back and sides helps the listener focus attention on the speaker and ignore competing noise. Directional benefit is based on the concept of face-to-face communication; if users expect their hearing aids to reduce background noise from all angles, they are likely to be disappointed. Similarly, if they expect the aids to completely eliminate background noise, rather than slightly reduce it, they will be unimpressed. It is helpful for hearing aid users, especially those new to directional microphones, to be counseled about realistic expectations as well as proper positioning in noisy environments. If listeners know what to expect and are able to position themselves for maximum directional effect, they are more likely to perceive benefit from their hearing aids in noisy conditions.

To date, it has been difficult to correlate directional benefit under laboratory conditions with perceived directional benefit. It is clear that directionality offers performance benefits in noise, but directional benefit measured in a sound booth does not seem to predict everyday success with directional microphones. Many factors are likely to affect real-life performance with directional microphone hearing aids, including audiometric variables, the frequency response and gain equalization of the directional mode, the venting of the hearing aid and the contribution of visual cues to speech understanding (Ricketts, 2000a; 2000b). Further investigation is still needed to elucidate the impact of these variables on the everyday experiences of hearing aid users.

As is true for all hearing aid features, directional microphones must be prescribed appropriately, and hearing aid users should be counseled about realistic expectations and the circumstances in which directional microphones are beneficial. Although most modern hearing instruments can adjust automatically to changing environments, manually accessed directional modes offer hearing aid wearers increased flexibility and may increase use by allowing the individual to make decisions regarding their comfort and performance in noisy places. Routine reinforcement of techniques for proper directional microphone use is encouraged. Hearing aid users should be encouraged to experiment with their directional programs to determine where and when they are most helpful. For the patient, proper identification of and positioning in noisy environments is an essential step toward meeting their specific listening needs and preferences.

References

Agnew, J. & Block, M. (1997). HINT thresholds for a dual-microphone BTE. Hearing Review 4, 26-30.

Bronkhorst, A. & Plomp, R. (1990). A clinical test for the assessment of binaural speech perception in noise. Audiology 29, 275-285.

Cord, M.T., Surr, R.K., Walden, B.E. & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Dubno, J.R., Dirks, D.D. & Morgan, D.E. (1984).  Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Gelfand, S.A., Ross, L. & Miller, S. (1988). Sentence reception in noise from one versus two sources: effects of aging and hearing loss. Journal of the Acoustical Society of America 83, 248-256.

Kochkin, S. (1993). MarkeTrak III identifies key factors in determining customer satisfaction. Hearing Journal 46, 39-44.

Nilsson, M., Soli, S.D. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Ricketts, T. (2000a). Directivity quantification in hearing aids: fitting and measurement effects. Ear and Hearing 21, 44-58.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. (2001). Directional hearing aids. Trends in Amplification 5, 139-175.

Ricketts, T.  & Henry, P. (2002). Evaluation of an adaptive, directional microphone hearing aid. International Journal of Audiology 41, 100-112.

Ricketts, T. & Henry, P. (2003). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology 11, 1-13.

Ricketts, T. & Mueller, H.G. (2000). Predicting directional hearing aid benefit for individual listeners. Journal of the American Academy of Audiology 11, 561-569.

Surr, R.K., Walden, B.E. Cord, M.T. & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. & Dyrlund, O. (2004). Predicting microphone preference in everyday living. Journal of the American Academy of Audiology 15, 365-396.

Are you prescribing an appropriate MPO?

Effect of MPO and Noise Reduction on Speech Recognition in Noise

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the original authors.

A component of clinical best practice would suggest that clinicians determine a patient’s uncomfortable listening levels in order to prescribe the output limiting characteristics of a hearing aid (Hawkins et al., 1987). The optimal maximum power output (MPO) should be based on two goals: preventing loudness discomfort and avoiding distorted sound quality at high input levels. The upper limit of a prescribed MPO must allow comfortable listening; less consideration is given to the consequences that under-prescribing MPO might have on hearing aid and patient performance.

There are two primary concerns related to the acceptable lower MPO limit: saturation and insufficient loudness. Saturation occurs when the input level of a stimulus plus the gain applied by the hearing aid exceeds the MPO, causing distortion and temporal smearing (Dillon & Storey, 1998). This results in a degradation of speech cues and a perceived lack of clarity, particularly in the presence of competing noise. Similarly, insufficient loudness reduces the availability of speech cues. There are numerous reports of subjective degradation of sound when MPO is set lower than prescribed levels, particularly in linear hearing instruments (Kuk et al., 2008; Storey et al., 1998; Preminger et al., 2001). There is not yet consensus on whether low MPO levels also cause objective degradation in performance.
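The saturation mechanism can be illustrated with a toy output limiter; the gain and level values below are hypothetical, chosen only to show when the output cap engages:

```python
# Toy peak limiter: the amplified level is capped at the MPO. When
# input + gain exceeds the MPO, the peaks of the signal are clipped,
# which is the saturation distortion described above.
# All numbers are hypothetical, not fitting values from the study.

def aid_output_db(input_db, gain_db, mpo_db):
    """Output level (dB SPL) of a simple limiter: amplified level capped at the MPO."""
    return min(input_db + gain_db, mpo_db)

speech_peak = 75  # dB SPL input
gain = 35         # dB of gain applied in this channel

print(aid_output_db(speech_peak, gain, mpo_db=120))  # prints 110: below the MPO, unclipped
print(aid_output_db(speech_peak, gain, mpo_db=105))  # prints 105: clipped at the MPO
```

In the second case the intended 110 dB SPL output is held at 105 dB SPL, so the loudest portions of speech are flattened rather than reproduced.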

The purpose of the study described here was to determine if sub-optimal MPO could affect speech intelligibility in the presence of noise, even in a multi-channel, nonlinear hearing aid. Furthermore, the authors examined if gain reductions from a noise reduction algorithm could mitigate the detrimental effects of the lower MPO. The authors reasoned that a reduction in output at higher input levels, via compression and noise reduction, could reduce saturation and temporal distortion.

Eleven adults with flat, severe hearing losses participated in the reviewed study. Subjects were fitted bilaterally with 15-channel, wide dynamic range compression, behind-the-ear hearing aids. Microphones were set to omnidirectional and other than noise reduction, no special features were activated during the study. Subjects responded to stimuli from the Hearing in Noise Test (HINT, Nilsson et al., 1994) presented at a 0-degree azimuth angle in the presence of continuous speech-shaped noise. The HINT stimuli yielded reception thresholds for speech (RTS) scores for each test condition.

Test conditions included two MPO prescriptions: the default MPO level (Pascoe, 1989) and 10dB below that level. The lower setting was chosen based on previous work that reported an approximately 18dB acceptable MPO range for listeners with severe hearing loss  (Storey et al., 1998). MPOs set at 10dB below default would therefore be likely to approach the low end of the acceptable range, resulting in perceptual consequences. Speech-shaped noise was presented at two levels: 68dB and 75dB. Testing was done with and without digital noise reduction (DNR).

Analysis of the HINT RTS scores yielded significant main effects of MPO and DNR, as well as significant interactions between MPO and DNR, and DNR and noise level. There was no significant difference between the two noise level conditions. Subjects performed better with the default MPO setting versus the reduced MPO setting. The interaction between the MPO and DNR showed that subjects’ performance in the low-MPO condition was less degraded when DNR was activated. These findings support the authors’ hypotheses that reduced MPO can adversely affect speech discrimination and that noise reduction processing can at least partially mitigate these adverse effects.

Prescriptive formulae have proven to be reasonably good predictors of acceptable MPO levels (Storey et al., 1998; Preminger et al., 2001). In contrast, there is some question as to the value of clinical UCL testing prior to fitting, especially when validation with loudness measures is performed after the fitting (Mackersie, 2006). Improper instruction for the UCL task may yield inappropriately low UCL estimates, resulting in compromised performance and sound quality. The authors of the current paper recommend following prescriptive targets for MPO and conducting verification measures after the fitting, such as the real-ear saturation response (RESR) and subjective loudness judgments.

Another scenario, and an ultimately avoidable one, involves individuals who have been fitted with inappropriate instruments for their loss, usually because of cosmetic concerns. It is unfortunately not so unusual for some individuals with severe hearing losses to be fitted with RIC or CIC instruments because of their desirable cosmetic characteristics. Smaller receivers will likely have MPOs that are too low for hearing aid users with severe hearing loss. Many hearing-aid users may not realize they are giving anything up when they select a CIC or RIC and may view these styles as equally appropriate options for their loss. The hearing aid selection process must therefore be guided by the clinician; clients should be educated about the benefits and limitations of various hearing aid options and counseled about the adverse effects of under-fitting their loss with a more cosmetically appealing option.

The results of the current study are important because they illuminate an issue related to hearing aid output that might not always be taken into clinical consideration. MPO settings are usually thought of as a way to prevent loudness discomfort, so the concern is to avoid setting the MPO too high. Kuk and his colleagues have shown that an MPO that is too low can also have adverse effects, and have provided valuable information to help clinicians select appropriate MPO settings. Additionally, their findings show objective benefits of noise reduction strategies and support their use, particularly for individuals with reduced dynamic range due to severe hearing loss or tolerance issues. Of course, their findings may not be generalizable to all multi-channel compression instruments, given the wide variety of compression characteristics that are available, but they present important considerations that should be examined in further detail with other instruments.

References

ANSI (1997). ANSI S3.5-1997. American National Standards methods for the calculation of the speech intelligibility index. American National Standards Institute, New York.

Dillon, H. & Storey, L. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: theoretical derivation. Ear and Hearing 19(4), 255-266.

Hawkins, D., Walden, B., Montgomery, A. & Prosek, R. (1987). Description and validation of an LDL procedure designed to select SSPL90. Ear and Hearing 8, 162-169.

Kuk , F., Korhonen, P., Baekgaard, L. & Jessen, A. (2008). MPO: A forgotten parameter in hearing aid fitting. Hearing Review 15(6), 34-40.

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010, fast track article.

Kuk, F. & Paludan-Muller, C. (2006). Noise management algorithm may improve speech intelligibility in noise. Hearing Journal 59(4), 62-65.

Mackersie, C. (2006). Hearing aid maximum output and loudness discomfort: are unaided loudness measures needed? Journal of the American Academy of Audiology 18 (6), 504-514.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95(2), 1085-1099.

Pascoe, D. (1989). Clinical measurements of the auditory dynamic range and their relation to formulae for hearing aid gain. In J. Jensen (Ed.), Hearing Aid Fitting: Theoretical and Practical Views. Proceedings of the 13th Danavox Symposium. Copenhagen: Danavox, pp. 129-152.

Preminger, J., Neuman, A. & Cunningham, D. (2001). The selection and validation of output sound pressure level in multichannel hearing aids. Ear and Hearing 22(6), 487-500.

Storey, L., Dillon, H., Yeend, I. & Wigney, D. (1998). The National Acoustic Laboratories, procedure for selecting the saturation sound pressure level of hearing aids: experimental validation. Ear and Hearing 19(4), 267-279.

Addressing patient complaints when fine-tuning a hearing aid

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

As part of any clinically robust protocol, a hearing aid fitting will be objectively verified with real-ear measures and validated with a speech-in-noise test. Fine tuning and follow-up adjustments are an equally important part of the fitting process. This stage of the routine fitting process does not follow standardized procedures and is almost always directed by a patient’s complaints or descriptions of real-world experience with the hearing aids. This can be a challenging dynamic for the clinician. Patients may have difficulty putting their auditory experience into words and different people may describe similar sound quality issues in different ways.  Additionally, there may be several ways to address any given complaint and a given programming adjustment may not have the same effect on different hearing aids.

Hearing aid manufacturers often include a fine-tuning guide or automated fitting assistant within their software to help the clinician make appropriate adjustments for common patient complaints. There are limitations to the effectiveness of these fine tuning guides in that they are inherently specific to a limited range of products and the suggested adjustments are subject to the expertise and resources of that manufacturer.  The manner in which a sound quality complaint is described may differ between manufacturers and the recommended adjustments in response to the complaint may differ as well.

There have been a number of efforts to develop a single hearing aid troubleshooting guide that could be used across devices and manufacturers (Moore et al., 1998; Gabrielsson, 1979; Gabrielsson et al., 1988, 1990; Lundberg et al., 1992; Ovegard et al., 1997). The first and perhaps most challenging step toward this goal has been to determine the most common descriptors that patients use for sound quality complaints. Moore (1998) and his colleagues developed a procedure in which responses on three rating scales (e.g., “loud versus quiet”, “tinny versus boomy”) were used to make adjustments to gain and compression settings. However, their procedure did not allow for the wide variety of descriptors that patients create, limiting its potential utility in everyday clinical settings. Gabrielsson and colleagues, in a series of Swedish studies, developed a set of reliable terms to describe sound quality. These descriptors have since been translated and used in English language research (Bentler et al., 1993).

As hearing instruments become more complex, with numerous adjustable parameters, and given the wide range of experience and expertise of individuals fitting hearing instruments today, an independent fine-tuning guide is an appealing concept. Lorienne Jenstad and her colleagues proposed an “expert system” for troubleshooting hearing aid complaints. The authors explained that expert systems “emulate the decision making abilities of human experts” (Tharpe et al., 1993). To develop the system, two primary questions were asked:

1) What terms do hearing impaired listeners use to describe their reactions to specific hearing aid fitting problems?

2) What is the expert consensus on how these patient complaints can be addressed by hearing aid adjustment?

There were two phases to the project. To address question one, the authors surveyed clinicians for their reports on how patients describe sound quality with regard to specific fitting problems. To address question two, the most frequently reported descriptors from the clinicians’ responses were submitted to a panel of experts to determine how they would address the complaints.

The authors sent surveys to 1934 American Academy of Audiology members and received 311 qualifying responses. The surveys listed 18 open-ended questions designed to elicit descriptive terms that patients would likely use for hearing aid fitting problems. For example, the question “If the fitting has too much low-frequency gain…” yielded responses such as “hollow”, “plugged” and “echo”.  The questions probed common problems related to gain, maximum output, compression, physical fit, distortion and feedback.  The survey responses yielded a list of the 40 most frequent descriptors of hearing aid fitting problems, ranked according to the number of occurrences.

The list of descriptors was used to develop a questionnaire to probe potential solutions for each problem.  Each descriptor was put in the context of, “How would you change the fitting if your patient reports that ___?”, and 23 possible fitting solutions were listed.  These questionnaires were completed by a panel of experts with a minimum of five years of clinical experience. Respondents could offer more than one solution to a problem and the solutions were weighted based on the order in which they were offered. There was strong agreement among experts, suggesting that their responses could be used reliably to provide troubleshooting solutions based on sound quality descriptions. The expert responses also agreed with the initial survey that was sent to the group of 1934 audiologists, supporting the validity of these response sets.

The expert responses resulted in a fine-tuning guide in the form of tables or simplified flow charts. The charts list individual descriptors with potential solutions listed below in the order in which they should be attempted.  For example, below the descriptor “My ear feels plugged”, the first solution is to “increase vent” and the second is to “decrease low frequency gain”. The idea is that the clinician would first try to increase the vent diameter and if that didn’t solve the problem, they would move on to the second option, decreasing low frequency gain. If an attempted solution creates another sound quality problem, the table can be utilized to address that problem in the same way.
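The tables behave like an ordered lookup: each descriptor maps to a ranked list of adjustments to be tried in turn. A minimal sketch, in which the "plugged ear" entry paraphrases the example above and the other entry is purely illustrative, not taken from the study's tables:

```python
# Hypothetical sketch of the expert-system tables: each complaint
# descriptor maps to an ordered list of solutions, tried in ranked order.

TROUBLESHOOTING_GUIDE = {
    "my ear feels plugged": [
        "increase vent",
        "decrease low-frequency gain",
    ],
    # Illustrative entry only; see the discussion of cerumen checks below.
    "hearing aid is whistling": [
        "check for cerumen",
        "adjust feedback management",
    ],
}

def ranked_solutions(complaint):
    """Return the ordered solutions for a complaint, or an empty list if unknown."""
    return TROUBLESHOOTING_GUIDE.get(complaint.lower().strip(), [])

print(ranked_solutions("My ear feels plugged"))
# prints ['increase vent', 'decrease low-frequency gain']
```

The clinician would work down the returned list, stopping when a solution resolves the complaint, and re-consulting the table if an adjustment introduces a new sound quality problem.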

The authors correctly point out that there are limitations to this tool and that proposed solutions will not necessarily have the same results with all hearing aids. For instance, depending on the compressor characteristics, raising a kneepoint might increase OR decrease the gain at input levels below the kneepoint. It is up to the clinician to be familiar with a given hearing aid and its adjustable parameters to arrive at the appropriate course of action.

Beyond manipulation of the hearing aid itself, the optimal solution for a particular patient complaint might not be the first recommendation in any tuning guide. For instance, for the fitting problem labeled “Hearing aid is whistling”, the fourth solution listed in the table is “check for cerumen”.  This solution appeared fourth in the ranking based on the frequency of responses from the experts on the panel. However, any competent clinician who encounters a patient with hearing aid feedback should check for cerumen first before considering programming modifications.

The expert system proposed by Jenstad and her colleagues represents a thoroughly examined, reliable step toward development of a universal troubleshooting guide for clinicians. Their paper was published in 2003, so some items should be updated to suit modern hearing aids. For example, current feedback management strategies result in fewer and less challenging feedback problems.  Solutions for feedback complaints might now include, “calibrate feedback management system” versus gain or vent adjustments. Similarly, most hearing aids now have solutions for listening in noise that extend beyond the simple inclusion of directional microphones, so “directional microphone” might not be an appropriately descriptive solution to address complaints about hearing in noise, as the patient is probably already using a directional microphone.

Overall, the expert system proposed by Jenstad and colleagues is a helpful clinical tool, especially if positioned as a guide to help patients find the appropriate terms to describe their perceptions. However, as the authors point out, it is not meant to replace prescriptive methods, measures of verification and validation, or the expertise of the audiologist. The responsibility remains with the clinician to stay informed about current technology and its implications for real-world hearing aid performance, and to communicate with patients in enough detail to understand their comments and address them appropriately.

References

Bentler, R.A., Nieburh, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness II: subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

Moore, B.C.J., Alcantara, J.I. & Glasberg, B.R. (1998). Development and evaluation of a procedure for fitting multi-channel compression hearing aids. British Journal of Audiology 32, 177-195.

Gabrielsson, A. (1979). Dimension analyses of perceived sound quality of sound-reproducing systems. Scandinavian Journal of Psychology 20, 159-169.

Gabrielsson, A., Hagerman, B., Bech-Kristensen, T. & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America 88, 1359-1366.

Gabrielsson, A. Schenkman, B.N. & Hagerman, B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. Journal of Speech and Hearing Research 31, 166-177.

Lundberg, G., Ovegard, A., Hagerman, B., Gabrielsson, A. & Brandstom, U. (1992). Perceived sound quality in a hearing aid with vented and closed earmold equalized in frequency response. Scandinavian Audiology 21, 87-92.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A., Bengtsson, M. & Brandstrom, U. (1997). Sound quality judgments during acclimatization of hearing aids. Scandinavian Audiology 26, 43-51.

Schweitzer, C., Mortz, M. & Vaughan, N. (1999). Perhaps not by prescription – but by perception. High Performance Hearing Solutions 3, 58-62.

Tharpe, A.M., Biswas, G. & Hall, J.W. (1993). Development of an expert system for pediatric auditory brainstem response interpretation. Journal of the American Academy of Audiology 4, 163-171.

Recommendations for fitting patients with cochlear dead regions

Cochlear Dead Regions in Typical Hearing Aid Candidates:

Prevalence and Implications for Use of High-Frequency Speech Cues

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Audibility is a well-known predictor of speech recognition ability (Humes, 2007) and audibility of high-frequency information is of particular importance for consonant identification.  Therefore, audibility of high-frequency speech cues is appropriately regarded as an important element of successful hearing aid fittings (Killion & Tillman, 1982; Skinner & Miller, 1983). In contrast to this expectation, some studies have reported that high-frequency gain might have limited or even negative impact on speech recognition abilities of some individuals (Murray & Byrne, 1986; Ching et al., 1998; Hogan & Turner, 1998). These researchers observed that when high-frequency hearing loss exceeded 55-60dB, some listeners were unable to benefit from increased high-frequency audibility.  A potential explanation for this variability was provided by Brian Moore (2001), who suggested that an inability to benefit from amplification in a particular frequency region could be due to cochlear “dead regions” or regions where there is a loss of inner hair cell functioning.

Moore suggested that hearing aid fittings could potentially be improved if clinicians were able to identify patients with cochlear dead regions (DRs). Working under the assumption that a diagnosis of DRs may contraindicate high-frequency amplification, he and his colleagues developed the TEN test as a method of determining the presence of cochlear dead regions (Moore et al., 2000, 2004). The advent of the TEN test provided a standardized measurement protocol for DRs, but there is still wide variability in the reported prevalence of DRs. Estimates range from as low as 29% (Preminger et al., 2005) to as high as 84% (Hornsby & Dundas, 2009), with other studies reporting DR prevalence somewhere in the middle of that range. Several factors are likely to contribute to this variability, including degree of hearing loss, audiometric configuration and test technique.

In addition to the variability in reported prevalence of DRs, there is also variability in reports of how DRs affect the ability to benefit from high-frequency speech cues (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004). It remains unclear whether high-frequency amplification recommendations should be modified to reflect the presence of DRs. Most research agrees that as hearing thresholds increase, the likelihood of DRs also increases, and hearing aid users with severe to profound hearing losses are likely to have at least one DR. Because a large proportion of hearing aid users have moderate to severe hearing losses, Dr. Cox and her colleagues wanted to determine the prevalence of DRs in this population. In addition, they examined the effect of DRs on the use of high-frequency speech cues by individuals with moderate to severe loss.

Their study addressed two primary questions:

1) What is the prevalence of dead regions (DRs) among listeners with hearing thresholds in the 60-90dB range?

2) For individuals with hearing loss in the 60-90dB range, do those with DRs differ from those without DRs in their ability to use high-frequency speech cues?

One hundred and seventy adults with bilateral, flat or sloping sensorineural hearing loss were tested. All subjects had thresholds of 60 to 90dB in the better ear for at least part of the range from 1-3kHz and thresholds no better than 25dB for frequencies below 1kHz. Subjects ranged in age from 38 to 96 years, and 59% of the subjects had experience with hearing aids.

First, subjects were evaluated for the presence of DRs with the TEN test. Then, speech recognition was measured using high-frequency emphasis (HFE) and high-frequency emphasis, low-pass filtered (HFE-LP) stimuli from the QSIN test (Killion et al., 2004). HFE items on this test are amplified up to 32dB above 2.5kHz, whereas the HFE-LP items have much less gain in this range. Comparison of subjects’ responses to these two types of stimuli allowed the investigators to assess changes in speech intelligibility with additional high-frequency cues. Presentation levels for the QSIN were chosen by using a loudness scale and bracketing procedure to arrive at a level that the subject considered “loud but okay”. Finally, audibility differences for the two QSIN conditions were estimated using the Speech Intelligibility Index based on ANSI S3.5-1997 (ANSI, 1997).
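The Speech Intelligibility Index summarizes how much of the speech signal is audible, weighted by each frequency band's importance. As a rough illustration of the underlying idea only (the band divisions, levels, thresholds and importance weights below are hypothetical, not the values specified in ANSI S3.5-1997, which also includes corrections this sketch omits):

```python
def band_audibility(speech_level, threshold):
    """Simplified band audibility: the fraction of a ~30 dB speech
    dynamic range (peaks ~15 dB above the long-term level) that lies
    above the listener's threshold, clipped to the range 0..1."""
    a = (speech_level - threshold + 15.0) / 30.0
    return max(0.0, min(1.0, a))

def sii(speech_levels, thresholds, importance):
    """Importance-weighted sum of band audibilities (0..1)."""
    return sum(w * band_audibility(s, t)
               for s, t, w in zip(speech_levels, thresholds, importance))

# Illustrative 4-band example (levels in dB; weights sum to 1).
speech = [55, 50, 45, 40]
thresh = [20, 30, 55, 70]
weights = [0.2, 0.3, 0.3, 0.2]
print(round(sii(speech, thresh, weights), 3))  # 0.55
```

In this toy example the two high-frequency bands contribute little to the index because speech there falls near or below threshold, which is exactly the situation the HFE versus HFE-LP comparison probes.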

The TEN test results revealed that 31% of the participants had DRs at one or more test frequencies. Of the 307 ears tested, 23% were found to have a DR at one or more frequencies. Among those who tested positive for DRs, about one-third had DRs in both ears; the remaining two-thirds had a DR in only one ear, divided equally between left and right. Mean audiometric thresholds were essentially identical for the two groups below 1kHz, but above 1kHz thresholds were significantly poorer for the group with DRs than for the group without DRs. DRs were most prevalent at frequencies above 1.5kHz. There were no age or gender differences.

On the QSIN test, the mean HFE-LP scores were significantly poorer than the mean HFE scores for both groups. There was also a significant difference in performance based on whether or not the participants had DRs. Perhaps more interestingly, there was a significant interaction between DR group and test condition, in that the additional high-frequency information in the HFE stimuli resulted in slightly greater performance gains for the group without DRs than for the group with DRs. Furthermore, subjects with one or more isolated DRs were better able to benefit from the high-frequency cues in the HFE lists than were subjects with multiple, contiguous DRs. Although a few individuals demonstrated lower scores for the HFE stimuli, the differences were not significant and could be explained by measurement error. The authors therefore conclude that the additional high-frequency information in the HFE stimuli was unlikely to have had a detrimental effect on performance for these individuals.

As had also been reported in previous studies, the group with DRs had poorer mean audiometric thresholds than the group without DRs, so it was possible that audibility played a role in QSIN performance. Analysis of the audibility of QSIN stimuli for the two groups revealed that high-frequency cues in the HFE lists were indeed more audible for the group without DRs. Even after accounting for this audibility effect, however, the presence of DRs still had a small but significant effect on performance.

The results of this study suggest that listeners with cochlear DRs still benefit from high frequency speech cues, albeit slightly less than those without dead regions.  The performance improvements were small and the authors caution that it is premature to draw firm conclusions about the clinical implications of this study.  Despite the need for further examination, the results of the current study certainly do not support any reduction in prescribed gain for hearing aid candidates with moderate to severe hearing losses.  The authors acknowledge, however, that because the findings of this and other studies are based on group data, it is possible that specific individuals may be negatively affected by amplification within dead regions. Based on the research to date, this seems more likely to occur in individuals with profound hearing loss who may have multiple, contiguous DRs.

More study is needed to determine the most effective clinical approach to managing cochlear dead regions in hearing aid candidates. Future research should be done with hearing aid users, including, for example, the effects of noise on everyday hearing aid performance for individuals with DRs. A study by Mackersie et al. (2004) showed that subjects with DRs suffered more negative effects of noise than did subjects without DRs. If there is a convergence of evidence to this effect, then recommendations about the use of high-frequency gain, directionality and noise reduction could be determined as they relate to DRs. For now, Dr. Cox and her colleagues recommend that until there are clear criteria to identify individuals for whom high-frequency gain could have deleterious effects, clinicians should continue using best-practice protocols and provide high-frequency gain according to current prescriptive methods.

References

ANSI (1997). American National Standard Methods for Calculation of the Speech Intelligibility Index (Vol. ANSI S3.5-1997). New York: American National Standards Institute.

Ching, T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

Hogan, C.A. & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Humes, L.E. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Killion, M. C. & Tillman, T.W. (1982). Evaluation of high-fidelity hearing aids. Journal of Speech and Hearing Research 25, 15-25.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C.J., Huss, M., Vickers, D.A., et al. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C.J., Glasberg, B.R., Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Murray, N. & Byrne, D. (1986). Performance of hearing-impaired and normal hearing listeners with various high-frequency cut-offs in hearing aids. Australian Journal of Audiology 8, 21-28.

Skinner, M.W. & Miller, J.D. (1983). Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss.  Audiology 22, 253-279.

A preferred speech stimulus for testing hearing aids

Development and Analysis of an International Speech Test Signal (ISTS)

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Current hearing aid functional verification measures are described in the standards IEC 60118 and ANSI S3.22 and use stationary signals, including sine wave frequency sweeps and unmodulated noise. Test stimuli are presented to the hearing instrument, and frequency-specific gain and output are measured in a coupler or ear simulator. Current standardized measurement methods require the instrument to be set at maximum or a reference test setting, with adaptive parameters such as noise reduction and feedback management turned off.

These procedures provide helpful information for quality assurance and determining fitting ranges for specific hearing aid models. However, because they were designed for linear, time-invariant hearing instruments, they have limitations for today’s nonlinear, adaptive instruments and cannot provide meaningful information about real-life performance in the presence of dynamically changing acoustic environments.

Speech is the most important stimulus encountered by hearing aid users, and nonlinear hearing aids with adaptive characteristics process speech differently than they do stationary signals like sine waves and unmodulated noise. Therefore, it seems preferable for standardized test procedures to use stimuli that are as close as possible to natural speech. Indeed, some hearing aid test protocols use samples of recorded or live speech. But natural speech stimuli will have different spectra, fundamental frequencies and temporal characteristics depending on the speaker, the source material and the language. For hearing aid verification measures to be comparable to each other, it is necessary to have standardized stimuli that can be used internationally.

Alternative test stimuli have been proposed based on the long-term average speech spectrum (Byrne et al., 1994) or temporal envelope fluctuations (Fastl, 1987). The International Collegium for Rehabilitative Audiology (ICRA) developed a set of stimuli (Dreschler, 2001) that reflect the long-term average speech spectrum and have speech-like modulations that differ across frequency bands.  ICRA stimuli have advantages over modulated noise and sine wave stimuli in that they share some similar characteristics with speech, but they lack speech-like comodulation characteristics (e.g., fundamental frequency). Furthermore, ICRA stimuli are often classified by signal processing algorithms as “noise” rather than “speech”, so they are less than optimal for measuring how hearing aids process speech.

The European Hearing Instrument Manufacturers Association (EHIMA) is developing a new measurement procedure for nonlinear, adaptive hearing instruments and an important part of their initiative is development of a standardized test signal or International Speech Test Signal (ISTS).  The development and analysis of the ISTS was described in a paper by Holube, et al. (2010).

There were fifteen articulated requirements for the ISTS, based on available test signals and knowledge of natural speech, the most clinically salient of which are:

  • The ISTS should resemble normal speech but should be non-intelligible.
  • The ISTS should be based on six major languages, representing a wide range of phonological structures and fundamental frequency variations.
  • The ISTS should be based on female speech and should deviate from the international long-term average speech spectrum (ILTASS) for females by no more than 1dB.
  • The ISTS should have a bandwidth of 100 to 16,000Hz and an overall RMS level of 65dB.
  • The dynamic range should be speech-like and comparable to published values for speech (Cox et al., 1988; Byrne et al., 1994).
  • The ISTS should contain voiced and voiceless components. Voiced components should have a fundamental frequency characteristic of female speech.
  • The ISTS should have short-term spectral variations similar to speech (e.g., formant transitions).
  • The ISTS should have modulation characteristics similar to speech (Plomp, 1984).
  • The ISTS should contain short pauses similar to natural running speech.
  • The ISTS stimulus should have a 60 second duration, from which other durations can be derived.
  • The stimulus should allow for accurate and reproducible measurements regardless of signal duration.

Twenty-one female speakers of six different languages (American English, Arabic, Mandarin, French, German and Spanish) were recorded while reading a story, the text and translations of which came from the Handbook of the International Phonetic Association (IPA). One recording from each language was selected based on a number of criteria including voice quality, naturalness and median fundamental frequency. The recordings were filtered to meet the ILTASS characteristics described by Byrne et al. (1994) and were then split into 500ms segments that roughly corresponded to individual syllables. These syllable-length segments were attached in pseudo-random order to generate sections of 10 or 15 seconds. Each of the resulting sections could be combined to generate different durations of the ISTS stimulus, and no single language was used more than once in any sequence of six segments. Speech interval and pause durations were analyzed to ensure that ISTS characteristics would closely resemble natural speech patterns.
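The segment-shuffling constraint described above can be sketched in a few lines. This is a simplified illustration only: the string labels stand in for 500ms audio segments, and the real ISTS construction applied additional selection criteria not modeled here.

```python
import random

def build_stream(segments_by_language, n_sections, seed=0):
    """Concatenate syllable-length segments in pseudo-random order,
    drawing each of the six languages exactly once per six-segment
    sequence, as in the ISTS construction scheme."""
    rng = random.Random(seed)
    languages = list(segments_by_language)
    order = []
    for _ in range(n_sections):
        block = languages[:]
        rng.shuffle(block)          # random language order per sequence
        order.extend(block)
    # Pick one (here: random) segment from the chosen language's pool.
    return [rng.choice(segments_by_language[lang]) for lang in order]

# Hypothetical labels standing in for 500 ms audio segments.
pools = {lang: [f"{lang}-{i}" for i in range(10)]
         for lang in ["en", "ar", "zh", "fr", "de", "es"]}
stream = build_stream(pools, n_sections=4)
print(len(stream))  # 24 segments = 4 sequences x 6 languages
```

Every window of six consecutive segments contains all six languages exactly once, which is the property that keeps any single language from dominating the stimulus.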

For analysis purposes, a 60-second ISTS stimulus was created by concatenation of 10- and 15-second sections.  This ISTS stimulus was measured and compared to natural speech and ICRA-5 stimuli based on several criteria:

  • Long-term average speech spectrum (LTASS)
  • Short term spectrum
  • Fundamental frequency
  • Proportion of voiceless segments
  • Band-specific modulation spectra
  • Comodulation characteristics
  • Pause and speech duration
  • Dynamic range (spectral power level distribution)

On all of the analysis criteria, the ISTS stimulus resembled natural speech stimuli as well or better than ICRA-5 stimuli. Notable improvements for the ISTS over the ICRA-5 stimulus were its comodulation characteristics and dynamic range of 20-30dB, as well as pauses and combinations of voiced and voiceless segments that more closely resembled the distributions in natural speech.  Overall, the ISTS was deemed an appropriate speech-like stimulus proposal for the new standard measurement protocol.

Following the detailed analysis, the ISTS stimulus was used to measure four different hearing instruments, which were programmed to fit a flat, sensorineural hearing loss of 60dBHL. Each instrument was nonlinear with adaptive noise reduction, compression and feedback management characteristics. The first-fit algorithms from each manufacturer were used, with all microphones fixed to an omnidirectional mode. Instead of yielding gain and output measurements across frequency for one input level, the results showed percentile-dependent gain (99th, 65th and 30th) across frequency as referenced to the long-term average speech spectrum. The percentile-dependent gain values provided information about nonlinearity, in that the softer components of speech were represented by the 30th percentile, while moderate and loud speech components were represented by the 65th and 99th percentiles, respectively. Relations among these three percentiles represented the differences in gain for soft, moderate and loud sounds.
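The idea of percentile-dependent gain can be illustrated with a short sketch: short-term levels of the input and output signals are summarized at the 30th, 65th and 99th percentiles, and the difference at each percentile is the gain for soft, moderate and loud components. The frame length, the instantaneous 2:1 compressor, and all signal details below are illustrative assumptions, not the actual measurement standard.

```python
import numpy as np

def percentile_levels(signal, frame_len, percentiles=(30, 65, 99)):
    """Short-term RMS levels (dB) of a signal, summarized at the
    percentiles used in percentile-dependent gain analysis."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    levels = 20 * np.log10(np.maximum(rms, 1e-12))
    return {p: np.percentile(levels, p) for p in percentiles}

def percentile_gain(input_sig, output_sig, frame_len=1024):
    """Gain at each percentile: output level minus input level.
    Differences across percentiles reveal compression."""
    li = percentile_levels(input_sig, frame_len)
    lo = percentile_levels(output_sig, frame_len)
    return {p: lo[p] - li[p] for p in li}

# Illustrative check: modulated noise through an instantaneous 2:1
# compressor (in dB) gets more gain at low levels than at high levels.
rng = np.random.default_rng(0)
x = rng.normal(size=48000) * np.repeat(rng.uniform(0.01, 1.0, 47), 1024)[:48000]
y = np.sign(x) * np.abs(x) ** 0.5
g = percentile_gain(x, y)
print(g[30] > g[99])  # True: more gain for soft than loud components
```

With a linear device the three percentile gains would coincide; the spread between the 30th- and 99th-percentile gains is what makes compression directly visible in this kind of display.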

The measurement technique described by Holube and colleagues, using the ISTS stimulus, offers significant advantages over current measurement protocols with standard sine wave or noise stimuli. First and perhaps most importantly, it allows hearing instruments to be programmed to real-life settings with adaptive signal processing features active. It measures how hearing aids process a stimulus that very closely resembles natural speech, so clinical verification measures may provide more meaningful information about everyday performance. By showing changes in percentile gain values across frequency, it also allows compression effects to be directly visible and may be used to evaluate noise reduction algorithms as well. The authors also note that the acoustic resemblance of the ISTS to speech, combined with its lack of linguistic information, may have additional applications for diagnostic testing, telecommunications or communication acoustics.

The ISTS is currently available in some probe microphone equipment and will likely be introduced in most commercially available equipment over the next few years. Its introduction brings a standardized speech stimulus for the testing of hearing aids to the clinic. An important component of clinical best practice is the measurement of a hearing aid’s response characteristics, which is most easily accomplished through in-situ probe microphone measurement in combination with a speech test stimulus such as the ISTS.

References

American National Standards Institute (ANSI). ANSI S3.22-2003. Specification of hearing aid characteristics. New York: Acoustical Society of America.

Byrne, D., Dillon, H., Tran, K., Arlinger, S. & Wilbraham, K. (1994). An international comparison of long-term average speech spectra. Journal of the Acoustical Society of America, 96(4), 2108-2120.

Cox, R.M., Matesich, J.S. & Moore, J.N. (1988). Distribution of short-term rms levels in conversational speech. Journal of the Acoustical Society of America, 84(3), 1100-1104.

Dreschler, W.A., Verschuure, H., Ludvigsen, C. & Westerman, S. (2001). ICRA noises: Artificial noise signals with speech-like spectral and temporal properties for hearing aid assessment. Audiology, 40, 148-157.

Fastl, H. (1987). Ein Störgeräusch für die Sprachaudiometrie. Audiologische Akustik, 26, 2-13.

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

International Electrotechnical Commission, 1994, IEC 60118-0. Hearing Aids: Measurement of electroacoustical characteristics, Bureau of the International Electrotechnical Commission, Geneva, Switzerland.

IPA, 1999. Handbook of the International Phonetic Association. Cambridge University Press.

Plomp, R. (1984). Perception of speech as a modulated signal. In M.P.R. van den Broecke & A. Cohen (eds), Proceedings of the 10th International Congress of Phonetic Sciences, Utrecht. Dordrecht: Foris Publications, 29-40.

Understanding the best listening configurations for telephone use when wearing hearing aids

Comparison of Wireless and Acoustic Hearing Aid Based Telephone Listening Strategies

Picou, E.M. & Ricketts, T.A. (2010) Comparison of wireless and acoustic hearing aid based telephone listening strategies. Ear and Hearing 31(6), 1-12.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Telephone use is an important consideration for hearing aid users, but it is often challenging to arrive at the appropriate method of coupling the phone to the ear and the related hearing aid settings. Many people with hearing loss have difficulty hearing on the telephone, and concerns about telephone use may result in reluctance to purchase new hearing aids or to use aids that have already been purchased (Kochkin, 2000). Indeed, in a survey of hearing aid satisfaction, one in five respondents reported dissatisfaction when using the telephone with a hearing aid (Kochkin, 2005).

There are a number of factors that affect a hearing aid user’s ability to hear on the phone, including lack of visual cues, reduced bandwidth, background noise and difficulty coupling the phone to the hearing aid. The lack of visual cues has been addressed recently with videoconferencing applications, but these are not commonly used, especially among older individuals. The reduced bandwidth (approximately 300 to 3,300 Hz) is characteristic of sound transmission over the phone, so there is little an individual can do to improve the availability of high-frequency speech cues over the phone. Background noise and coupling issues can be addressed in a number of ways, depending on the individual and the circumstances.

A hearing aid can be coupled directly to the telephone in two ways: inductively, via a telecoil, or acoustically, using settings that focus on the telephone’s limited frequency range. A drawback to the acoustic setting is that the hearing aid microphone remains active, which may result in feedback (Latzel et al., 2001; Palmer, 2001; Chung, 2004). Despite recent improvements in feedback control, this remains a problem, especially for those with severe hearing loss whose hearing aids require more gain. Additionally, the microphone picks up environmental noise that competes with the telephone signal, decreasing the signal-to-noise ratio. Telecoils can be a solution for feedback and poor signal-to-noise ratios, but they are subject to interference from fluorescent lights, computer equipment and power lines. Furthermore, it can be difficult to determine the proper positioning of the phone for optimal sound quality, as the telephone receiver must be placed as close to the telecoil as possible (Tannahill, 1983; Compton, 1994; Yanz & Preves, 2003).

A more recent option for telephone use is an intermediate wireless accessory, which routes sound from the phone to the hearing aids via a combination of Bluetooth and a direct-to-hearing-aid wireless technology. These devices address the problems with acoustic and telecoil coupling, and may provide additional benefit if the telephone signal is routed bilaterally (Green, 1976; Moore, 1998; Hall et al, 1984; Quaranta and Cervellera, 1974). Many hearing aid manufacturers offer wireless devices, but it is unclear whether their use results in significantly improved speech recognition over the phone. Even with wireless routing of the phone signal, there may still be detrimental effects of background noise, especially for individuals with open-canal hearing aids (Dillon, 1985; 1991).

The purpose of Picou and Ricketts’ study was to examine speech recognition performance with monaural and binaural wireless phone transmission, as well as a monaural acoustic condition, in the presence of two levels of background noise. They also evaluated performance with occluding versus non-occluding domes.

Twenty individuals with sloping, high-frequency, sensorineural hearing loss participated in the study. Subjects were fitted with binaural, receiver-in-canal hearing instruments with a wireless transmitter accessory. Half of the subjects were tested with open, non-occluding domes and half were tested with closed, occluding domes.

A total of seven hearing aid and telephone configurations were tested in two background noise levels (55dBA and 65dBA). Subjects responded to sentences from the Connected Speech Test (CST, Cox et al., 1987).  Speech stimuli were band pass filtered from 300 to 3400Hz to simulate telephone transmission and presented at 65dB.  Competing speech babble was presented through four loudspeakers positioned around the listener at a distance of 1 meter. All test conditions – hearing aid condition, dome type, noise level – were counterbalanced to avoid effects of learning and fatigue.
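The band-limiting step used to simulate telephone transmission can be sketched as follows. A brick-wall FFT filter is an assumption for illustration only, since the authors’ exact filter implementation is not described in this summary.

```python
import numpy as np

def telephone_bandpass(signal, fs, lo=300.0, hi=3400.0):
    """Crude FFT-domain band-pass (300-3400 Hz) approximating the
    band-limiting applied to the CST sentences. A brick-wall filter
    is a simplification of whatever filter the study actually used."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Sanity check with two tones: 1 kHz passes, 5 kHz is removed.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 5000 * t)
y = telephone_bandpass(x, fs)
spec = np.abs(np.fft.rfft(y))
print(spec[1000] > 1000, spec[5000] < 1.0)  # True True
```

Listening to stimuli processed this way makes the loss of high-frequency consonant cues over the phone immediately apparent, which is why the presence or absence of background noise weighs so heavily on telephone speech recognition.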

This study illuminates some important considerations in telephone use and supports the use of wireless telephone accessories, especially with bilateral routing. Participants performed best with the external hearing aid microphones turned off, but the authors acknowledge that for safety and monitoring of environmental sounds, it may be advisable to leave microphones active at an attenuated level. The authors suggest that further investigation is warranted to determine optimal levels of microphone attenuation that allow successful speech recognition over the phone while preserving environmental awareness.

Performance with occluding domes was better than with open domes for wireless telephone routing in noise, because occluding domes reduce the environmental noise entering the ear canal and thereby improve the signal-to-noise ratio. In the acoustic phone condition, open domes performed better than occluding domes; subjects tended to position the phone directly over the ear canal, which likely improved the signal-to-noise ratio by blocking background noise and isolating the speech transmitted from the phone.

Specific observations were made for participants wearing open-canal hearing aids. Specifically, users with open domes should be instructed to hold the phone directly over the ear canal for optimal speech recognition. Programming adjustments may be necessary to increase availability of low and mid-frequency speech cues and improve signal to noise ratio.  Conversely, users with occluding domes should be advised of the potential limitations of direct acoustic coupling to the phone and should be instructed to hold the phone receiver as close to the microphone as possible. Alternatively, patients with occluding domes may be better off using a telecoil, if available, for situations in which they cannot use a wireless device.

Interestingly, no significant improvement in speech recognition resulted from plugging the non-test ear or muting the hearing aid on the non-test ear. This is consistent with previous research on masking level differences for tones (Green, 1976; Moore, 1998) as well as a previous study of speech recognition over the phone, which found no improvement for normal-hearing listeners when the non-phone ear was plugged. It is inconsistent, however, with the reported preferences of hearing aid users. Despite the lack of improvement in the current study, the authors acknowledged that muting the hearing aid on the non-phone ear may reduce listening effort, which may be why listeners prefer it.

For users of wireless accessories, the results of this study clearly indicate that binaural routing is ideal. But for hearing aid users who do not have wireless devices, the optimal hearing aid settings and coupling method may depend on several factors. The extent of venting or openness should be considered when choosing an acoustic phone coupling; individuals with minimal venting may not hear well unless they are able to hold the telephone over the hearing aid microphone, while patients with open fittings may experience more challenges with background noise interference than the more occluded wearer.

Regardless of whether a client uses an intermediate wireless device for binaural telephone streaming, monaural acoustic listening or telecoil coupling, the attenuation level of the hearing aid microphones is also a consideration. For binaural wireless routing or streaming it is advisable to keep both hearing aid microphones active but attenuated, to preserve awareness of environmental sounds. For monaural acoustic/telecoil combinations the microphone level on the opposite ear can be attenuated slightly to allow environmental awareness but reduce distraction from surrounding noise. As noted earlier, further study is warranted to determine optimal microphone attenuation levels.

References

Chung, K. (2004). Challenges and recent developments in hearing aids. Part II. Feedback and occlusion effect reduction strategies, laser shell manufacturing processes and other signal processing technologies. Trends in Amplification 8, 125-164.

Compton, C. (1994). Providing effective telecoil performance with in-the-ear hearing instruments. Hearing Journal 47, 23-26.

Cox, R.M., Alexander, G.C. & Gilmore, C.A. (1987). Development of the connected speech test (CST). Ear and Hearing, 8 (supplement): 119S-126S.

Dillon, H. (1985). Earmolds and high frequency response modification. Hearing Instruments 36, 8-12.

Dillon, H. (1991). Allowing for real ear venting effects when selecting the coupler gain of hearing aids. Ear and Hearing 12(6), 406-416.

Green, D.M. (1976). An Introduction to Hearing. Hillsdale, NJ: Lawrence Erlbaum Associates.

Hall, J.W., Tyler, R.S., Fernandes, M.A. (1984). Factors influencing the masking level difference in cochlear hearing-impaired and normal-hearing listeners. Journal of Speech and Hearing Research 27, 145-154.

Hawkins, D.B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders 49, 409-418.

Kochkin, S. (2000). MarkeTrak V: “Why my hearing aids are in the drawer”: The consumers’ perspective. Hearing Journal 53, 34-42.

Kochkin, S. (2005). MarkeTrak VII: Customer satisfaction with hearing aids in the digital age. Hearing Journal 58, 30-39.

Latzel, M., Gebhart, T.M. & Kiessling, J. (2001). Benefit of a digital feedback suppression system for acoustical telephone communication. Scandinavian Audiology Supplementum 52, 69-72.

Moore, B.C.J. (1998). Cochlear Hearing Loss. London: Whurr Publishers.

Palmer, C.V. (2001). Ring, ring! Is anybody there? Telephone solutions for hearing aid users. Hearing Journal 54, 10.

Picou, E.M. & Ricketts, T.A. (2010) Comparison of wireless and acoustic hearing aid based telephone listening strategies. Ear and Hearing 31(6), 1-12.

Quaranta, A. & Cervellera, G. (1974). Masking level difference in normal and pathological ears. Audiology 13, 428-431.

Tannahill, J.C. (1983). Performance characteristics for hearing aid microphone versus telephone and telephone/telecoil reception modes. Journal of Speech and Hearing Research 26, 195-201.

Yanz, J.L. & Preves, D. (2003). Telecoils: Principles, pitfalls, fixes and the future. Seminars in Hearing 24, 29-41.


Understanding the benefits of bilateral hearing aids

A Prospective Multi-Centre Study of the Benefits of Bilateral Hearing Aids

Boymans, M., Goverts, S.T., Kramer, S.E., Festen, J.M. & Dreschler, W.A. (2008). A prospective multi-centre study of the benefits of bilateral hearing aids. Ear and Hearing 29(6), 930-941.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

The benefits of binaural amplification are generally well established and include improved speech discrimination in noise (Hawkins and Yacullo, 1984; Kobler & Rosenhall, 2002), improved localization of sound sources (Dreschler & Boymans, 1994; Punch et al, 1991), perception of balanced hearing, improved speech clarity (Chung & Stephens, 1986; Erdman & Sedge, 1981) and reduced listening effort (Noble, 2006). However, some studies have shown either little subjective difference between unilateral and bilateral amplification (Andersson et al, 1996) or even a subjective preference for unilateral hearing aids, especially in noise (Walden & Walden, 2005; Schreurs & Olsen, 1985).

The authors of the current study sought to confirm subjective evaluations of binaural hearing aids with objective, functional tests of localization and speech discrimination in noise. They also examined three diagnostic measures to determine their potential as predictors of binaural success.

Two hundred fourteen hearing-impaired subjects were recruited from eight audiology clinics in the Netherlands. Inclusion criteria were limited to native Dutch speakers who were physically able to complete the test procedures and had no contraindications for binaural hearing aid fitting. Therefore, individual characteristics varied widely with regard to prior hearing aid use, hearing aid style and circuitry, age and degree of hearing loss. Ten participants with normal hearing were also tested for reference purposes.

Prior to hearing aid fitting, in addition to basic diagnostic audiometry, participants completed three tests that were chosen as potential predictors of binaural benefit:

1. Interaural time differences.

2. Binaural masking level differences.

3. Speech reception thresholds in background noise.

Following the hearing aid fittings, functional binaural benefit was evaluated and questionnaires were administered to obtain subjective responses to unilateral and bilateral fittings. Three assessment tools were used:

1. Speech intelligibility in background noise with spatial separation of speech and noise.

2. Horizontal localization of everyday sounds.

3. Subjective questionnaires to examine differences between unaided, unilateral, and bilateral conditions for detection of sounds, discrimination of sounds, speech intelligibility in quiet and noise, localization, and comfort of loud sounds.

Not surprisingly, on all three diagnostic measures, normal hearing participants performed significantly better than hearing-impaired participants. There was a great deal of inter-participant variability within the hearing-impaired group.

On the functional test of speech intelligibility with spatially separated speech and noise, bilateral hearing aid users performed significantly better than unilateral hearing aid users. Improvements were noted for conditions in which competing sounds were presented ipsilateral and contralateral to the speech stimulus.  On the localization test, bilateral hearing instrument wearers again performed significantly better than unilateral hearing aid wearers.  Subjective questionnaires showed that unilateral hearing aid use was favored over unaided conditions for all categories except comfort of loud sounds. Similarly, bilateral hearing aid use was favored over unilateral for all categories except comfort of loud sounds.  This finding is in agreement with previous work by the lead author of the current study (Boymans, 2003).

Participants were asked to provide reasons why they preferred one or two hearing aids. The most common reason for preferring a unilateral fitting was that the user’s own voice was more pleasant with one hearing aid. For preferred bilateral fittings, the most common reasons were intelligibility on both sides, better localization, better sound quality, and better balance. Following completion of the study, 93% of the participants chose to purchase bilateral hearing aids, whereas 7% chose to purchase only one hearing aid.

One primary goal of the study was to determine if subjective benefit could be supported with objective test results. There was a significant positive correlation between bilateral benefit for speech perception and subjective satisfaction ratings, but other evaluated factors did not show this relationship. Therefore, the authors determined that functional test results could not distinguish between groups who preferred unilateral or bilateral fittings. Overall, however, the vast majority of participants preferred bilateral hearing aid fittings and the functional test results support a strong binaural benefit.

The second goal of the study was to evaluate potential predictive measures of binaural benefit. The results did not show strong correlations between bilateral hearing aid performance and interaural time difference, binaural masking level difference or speech reception threshold measures.  Therefore, these measures were not determined to have particular predictive value for determining binaural hearing aid success.  In fact, the strongest correlation between bilateral benefit and any other diagnostic measure was found for traditional audiometric measures of pure tone average and maximum speech recognition.

Binaural benefit was also examined with regard to other subject variables. The authors found greater binaural benefit for users with more severe hearing loss and for those with more symmetrical hearing loss. There were no significant differences between subjects who had previously been fitted with unilateral hearing aids and those who had been previously fitted bilaterally. Participants without prior hearing aid experience demonstrated slightly less binaural benefit and less satisfaction than those with previous experience. The authors point out that this finding is confounded by the fact that previous users tended to have significantly greater degrees of hearing loss than first-time users.

The bilateral benefit for localization was higher for in-the-ear hearing aid users than for behind-the-ear hearing aid users. The authors surmised that this could be related to pinna effects, but pinna effects generally aid vertical localization and front/back localization (Blauert, 1997), whereas the localization measures in the current study were strictly horizontal. Still, it is possible that preservation of pinna-related spectral cues in combination with binaural cues could have had an additive effect for the in-the-ear hearing aid users in the present study.

It is interesting to note that despite the highly variable subject population in this study, significant binaural benefit for speech intelligibility and localization was found across participants, and participants overwhelmingly preferred the use of binaural hearing aids over monaural. Variables such as microphone mode, noise reduction technology, and circuit quality were not specifically addressed or controlled. It is reasonable to surmise that performance in the one category in which subjects preferred unilateral hearing aids, comfort for loud sounds, could be improved by adjustments to noise reduction settings, MPO or gain settings, or use of adaptive directionality.  Therefore, the study as a whole offers strong support for binaural hearing aid recommendations and indicates that the only negative effect, that of loudness discomfort, could probably be easily corrected with current technology.

Participants in this study were all willing to consider binaural hearing aid use and therefore had relatively symmetrical hearing losses. The binaural benefits measured here can probably be reasonably extrapolated to individuals with asymmetrical hearing losses, but this issue might benefit from further study.  Also, it is likely that similar binaural benefits may also apply to potential hearing aid users who are unwilling or reluctant to consider binaural hearing aid use, but these clients will require more thorough counseling with regard to expectations and acclimatization.  The primary reason given for unilateral hearing aid preference was related to occlusion and the sound quality of one’s own voice. A reluctant user of new binaural hearing aids will need to understand that this is a common, but often short-lived, outcome of binaural hearing aid use.

Because of the poor predictive value of diagnostic tests for binaural hearing aid success, the authors advise that it is probably best for hearing aid users to determine binaural benefit individually, during their initial trial period. This is appropriate advice and may be in line with what most clinicians are already recommending to their patients. Because an individual’s work, home, and social activities are important determinants of their perceived hearing handicap, binaural hearing aids should always be tested thoroughly in these situations to evaluate benefit.  There is little financial risk involved, as most clinics offer at least a 30-day trial period with new instruments and many offer a 45- or 60-day trial. Should a client determine that the benefit of a second hearing aid does not outweigh the financial burden, they would be able to return the aid for a refund, losing only the cost of a custom earmold and/or a trial period fee.

The current study shows strong evidence for functional improvements as well as perceived advantages in binaural hearing aid users. However, the authors were unable to identify a diagnostic tool to effectively predict binaural success.  This raises an important question about the value of such a predictive measure.  The significant improvements enjoyed by binaural users and the overwhelming preference for two hearing aids over one suggest that binaural fittings should be the recommendation of choice for all clients with bilateral, aidable hearing loss.  Granted, there are some audiometric findings that preclude a binaural recommendation, such as profound hearing loss in one ear, normal hearing in one ear, or exceptionally poor word recognition ability in one ear. But these are obvious, well-known, and relatively uncommon clinical contraindications to binaural hearing aid use. It seems reasonable, as the authors eventually suggest, to forego predictive measures and allow clients to experience binaural benefits individually and determine the proper decision for themselves during their trial period.

References

Andersson, G., Palmkvist, A., Melin, L. (1996). Predictors of daily assessed hearing aid use and hearing capability using visual analogue scales. British Journal of Audiology 30, 27-35.

Blauert, J. (1997). Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge: MIT Press.

Boymans, M. (2003). Intelligent processing to optimize the benefits of hearing aids. Ph.D. thesis, University of Amsterdam.

Boymans, M., Goverts, S.T., Kramer, S.E., Festen, J.M. & Dreschler, W.A. (2008). A prospective multi-centre study of the benefits of bilateral hearing aids. Ear and Hearing 29(6), 930-941.

Chung, S.M. & Stephens, S.D. (1986).  Factors influencing binaural hearing aid use. British Journal of Audiology 20, 129-140.

Dreschler, W.A. & Boymans, M. (1994). Clinical evaluation on the advantage of binaural hearing aid fittings. Audiologische Akustik 5, 12-23.

Erdman, S.A.  & Sedge, R.K. (1981). Subjective comparisons of binaural versus monaural amplification. Ear and Hearing 2, 225-229.

Hawkins, D.B. & Yacullo, W.S. (1984). Signal-to-noise ratio advantage of binaural hearing aids and directional microphones under different levels of reverberation. Journal of Speech and Hearing Disorders 49, 278-286.

Kobler, S. & Rosenhall, U. (2002). Horizontal localization and speech intelligibility with bilateral and unilateral hearing aid amplification. International Journal of Audiology 41, 395-400.

Noble, W. (2006). Bilateral hearing aids: a review of self-reports of benefit in comparison with unilateral fitting. International Journal of Audiology 45, 63-71.

Punch, J.L., Jenison, R.L. & Alan, J. (1991). Evaluation of three strategies for fitting hearing aids binaurally. Ear and Hearing 12, 205-215.

Schreurs, K.K. & Olsen, W.O. (1985). Comparison of monaural and binaural hearing aid use on a trial period basis. Ear and Hearing 6, 198-202.

Walden, T.C. & Walden, B.E. (2005). Unilateral versus bilateral amplification for adults with impaired hearing. Journal of the American Academy of Audiology 16, 574-584.

Reviewing the benefits of open-fit hearing aids

Article of interest:

Unaided and Aided Performance with a Directional Open-Fit Hearing Aid

Valente, M., & Mispagel, K.M. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47, 329-336.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors. 

With the continued popularity of directional microphone use in open-fit and receiver-in-canal (RIC) hearing aids, there has been increasing interest in evaluating their performance in noisy environments. A number of studies have investigated the performance of directional, open-fit BTEs in laboratory conditions (Valente et al., 1995; Ricketts, 2000a; Ricketts, 2000b). Some have evaluated directional microphone performance in real-life or simulated real-life noise environments (Ching et al, 2009). In the current study, the authors compared performance in omnidirectional, directional and unaided conditions using RIC instruments in R-Space™ (Revit et al, 2002) recorded restaurant noise. Their goal was to obtain more externally valid results by using real-life noise in a controlled laboratory setting.

The R-Space™ method involved recordings of real restaurant noise from an 8-microphone, circular array. For the test conditions, these recordings were presented through an 8-speaker, circular array to simulate the conditions in the busy restaurant. One important factor that distinguishes this study from most others is that the subjects listened to speech stimuli in the presence of noise from all directions, including the front. At the time of this study, only a few other studies had tested directional microphone performance in the presence of multiple noise sources, including frontal (Ricketts, 2000a; Ricketts, 2001; Bentler et al., 2004).

The authors recruited 26 adults with no prior hearing aid experience for the study. They were fitted with binaural receiver-in-canal (RIC) instruments. The instruments were programmed without noise reduction processing and with independent omnidirectional and directional settings. Subjects were counseled on use and care of the instruments, including proper use of omnidirectional and directional programs. They returned for follow-up adjustments one week after their fitting, then used their instruments for four weeks before returning for testing. Subjects were given the opportunity to either purchase the hearing aids after the study at a 50% discount or receive a $200 payment for participation.

Hearing in Noise Test (HINT) (Nilsson et al., 1994) sentence reception thresholds were obtained to evaluate sentence perception in the uncorrelated R-Space noise. The Abbreviated Profile of Hearing Aid Benefit (APHAB) (Cox & Alexander, 1995) was also administered to evaluate perceived benefit from the instruments in the study. Four APHAB subscales were evaluated independently:

– Ease of communication (EC)
– Reverberation (RV)
– Background noise (BN)
– Aversiveness to loud sounds (AV)
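
APHAB subscale scores represent the percentage of listening situations a respondent reports as problematic, so aided benefit is conventionally computed as the unaided score minus the aided score. The sketch below illustrates that arithmetic; the scores are invented for illustration and are not the study's data.

```python
# Hypothetical APHAB scores (percent of situations reported as problematic).
# Subscale abbreviations follow the article; values are illustrative only.
unaided = {"EC": 55.0, "RV": 60.0, "BN": 70.0, "AV": 20.0}
aided   = {"EC": 25.0, "RV": 35.0, "BN": 45.0, "AV": 40.0}

# APHAB benefit = unaided minus aided: positive values mean fewer
# reported problems with the hearing aids.
benefit = {scale: unaided[scale] - aided[scale] for scale in unaided}

for scale, value in benefit.items():
    print(f"{scale}: {value:+.1f}")
```

Note that with these invented numbers the AV benefit comes out negative, mirroring the study's finding that aversiveness to loud sounds was the one subscale where aided scores were worse than unaided.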

The authors found that subjects’ performance in the directional condition was significantly better than both omnidirectional and unaided conditions. The omnidirectional condition was not significantly better than unaided; in fact results were slightly worse than those obtained in the unaided condition.

For the APHAB results, the authors found that on the EC, RV and BN subscales, aided scores were significantly better than unaided scores. Perhaps not surprisingly, the AV score, which evaluates aversiveness to loud sounds, was worse in the aided conditions. The aided results combined omnidirectional and directional conditions, so it is possible that aversion to noise was greater in the omnidirectional condition than in the directional condition. However, this was not specifically evaluated in the current study.

The authors pointed out that their directional benefit, which averaged 1.7dB, was lower than the benefits found in other studies of open-fit or RIC hearing instruments (Ricketts, 2000b; Ricketts, 2001; Bentler, 2004; Pumford et al., 2000). However, they mention that most of those studies did not use frontal noise sources in their arrays. Frontal noise sources should have obvious detrimental effects on directional microphone performance, so it is likely that the speaker arrangement in the current study affected the measured directional improvement. At the time of this publication, one other study had been conducted using the R-Space™ restaurant noise (Compton-Conley et al, 2004). They found mean directional benefits of 3.6 to 5.8 dB, but their subjects had normal hearing and the hearing aids they used were not an open-fit design and were very different from the ones in the current study.
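
Directional benefit in studies of this kind is simply the omnidirectional sentence reception threshold (in dB SNR, lower is better) minus the directional threshold, averaged across subjects. The sketch below shows that calculation with made-up thresholds chosen to mirror the roughly 1.7 dB mean benefit reported; none of these numbers are the study's data.

```python
# Illustrative HINT sentence reception thresholds in dB SNR (lower = better).
# Values are invented; each pair belongs to one hypothetical subject.
omni_srt        = [2.0, 3.5, 1.0, 4.0, 2.5]
directional_srt = [0.5, 1.5, -0.5, 2.0, 1.0]

# Directional benefit: omnidirectional SRT minus directional SRT, so a
# positive value means the directional mode tolerated more noise.
benefits = [o - d for o, d in zip(omni_srt, directional_srt)]
mean_benefit = sum(benefits) / len(benefits)

print(f"mean directional benefit: {mean_benefit:.1f} dB")
# prints: mean directional benefit: 1.7 dB
```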

Clinicians can gain a number of important insights from Valente and Mispagel’s study. First and foremost, directional microphones are likely to provide significant benefits for users of RIC hearing aids. At the time of publication, the authors noted that directional improvement would need to be demonstrated in order to warrant the extra expense of adding directional microphones to an open-fit hearing aid order. However, most of today’s open-fit and RIC instruments already come standard with directional microphones, many of which are automatically adjustable. So there is no need to justify the use of directional microphones on a cost basis, as they usually add nothing to the hearing aid purchase price.

This study provided more evidence for directional benefit in noise, but further work is needed to determine performance differences between directional and omnidirectional microphones in quiet conditions. Dispensing clinicians should always order instruments that have omnidirectional and directional modes, whether manually or automatically adjustable. This helps ensure that the instruments will perform optimally in most situations. Even instruments with automatically adjustable directional microphones often have push-buttons that allow us to give patients additional programs. For example, a manually accessible, directional program, perhaps with more aggressive noise reduction, offers the user another option for excessively noisy situations.

The current study obtained slightly reduced directional effects compared to other studies that tested subjects in speaker arrays without frontal noise sources. This underscores the importance of counseling patients about proper positioning when using directional settings. In general, patients should understand that they will be better off when they can put as much noise behind them as possible. But, it is also important to ensure that patients have reasonable expectations about directional microphones. They must understand that the directional microphone will help them focus on conversation in front of them, but will not completely remove competing noise behind them. Patients must also understand that omnidirectional settings are likely to offer no improvement in noise and might even be a detriment to speech perception in some noisy environments.

Subjects in Valente and Mispagel’s study were offered the opportunity to purchase their hearing instruments at a 50% discount after the study’s completion. Only 8 of the 26 subjects opted to do so. Of the remaining subjects, 3 reported that the perceived benefit was not enough to justify the purchase, whereas 15 subjects did not report any significant perceived benefit. This leads to another important point about patient counseling.

The subjects in this study, like most candidates for open-fit or RIC instruments, had normal low-frequency hearing. Therefore, they may have had less of a perceived need for hearing aids in the first place. It is important for audiologists to discuss realistic expectations and likely hearing aid benefits with patients in detail at the hearing aid selection appointment, before hearing aids are ordered. Patients who are unmotivated or do not perceive enough need for hearing assistance will ultimately be less likely to perceive significant benefit from their hearing aids. This is particularly true in everyday clinical situations, in which patients are not typically offered a 50% discount and will have to factor financial constraints into their decisions. For most open-fit or RIC candidates, their motivation and perceived handicap will be related to their lifestyle: their social activities, employment situation, hobbies, etc. Because a patient who has a less than satisfying experience with hearing aids may be reluctant to pursue them again in the future, it is critical for the clinician to help them establish realistic goals early on, before hearing aid options are discussed.

References
Bentler, R., Egge, J., Tubbs, J., Dittberner, A., and Flamme, G. (2004). Quantification of directional benefit across different polar response patterns. Journal of the American Academy of Audiology 15(9), 649-659.

Ching, T.Y.C., O’Brien, A., Dillon, H., Chalupper, J., Hartley, L., Hartley, D., Raicevich, G., and Hain, J. (2009). Directional effects on infants and young children in real life: implications for amplification. Journal of Speech, Language and Hearing Research 52, 1241-1254.

Compton-Conley, C., Neuman, A., Killion, M., and Levitt, H. (2004). Performance of directional microphones for hearing aids: real world versus simulation. Journal of the American Academy of Audiology 15, 440-455.

Cox, R.M. and Alexander, G.C. (1995). The abbreviated profile of hearing-aid benefit. Ear and Hearing 16, 176-183.

Nilsson, M., Soli, S. and Sullivan, J. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Pumford, J., Seewald, R,. Scollie, S. and Jenstad, L. (2000). Speech recognition with in-the-ear and behind-the-ear dual microphone hearing instruments. Journal of the American Academy of Audiology 11, 23-35.

Revit, L., Schulein, R., and Julstrom, S. (2002). Toward accurate assessment of real-world hearing aid benefit. Hearing Review 9, 34-38, 51.

Ricketts, T. (2000a). The impact of head angle on monaural and bilateral performance with directional and omnidirectional hearing aids. Ear and Hearing 21, 318-329.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T., Lindley, G., and Henry, P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear and Hearing 22, 348-360.

Valente, M., Fabry, D., and Potts, L. (1995). Recognition of speech in noise with hearing aids using a dual microphone. Journal of the American Academy of Audiology 6, 440-449.

Valente, M., & Mispagel, K.M. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47, 329-336.

The real-world benefits of directional microphones with infants and young children

This editorial reviews the published article:

Directional Effects on Infants and Young Children in Real Life: Implications for Amplification
Ching, O’Brien, Dillon, Chalupper, Hartley, Hartley, Raicevich and Hain, 2009

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

The beneficial effect of directional microphone use on adult speech perception in noisy environments is well known and is based on the fact that conversational speech usually takes place with participants facing each other. Reducing the level of competing sound behind the listener, even slightly, can increase the signal-to-noise ratio (SNR), resulting in improved identification and discrimination of speech sounds. Clinical audiologists are accustomed to counseling patients to maintain face-to-face contact whenever possible to get the most benefit from the directional microphones and to take advantage of visual cues as well.

The potential advantage of directional microphone use for children is less understood, partly because children may not employ face-to-face communication as regularly as adults do. Several studies have demonstrated the importance of improved SNR for speech reception in children and it is generally accepted that even children with normal hearing require a greater SNR than adults (Crandell & Smaldino, 2004; Johnstone & Litovsky, 2006). This is particularly true for hearing-impaired children, especially those of a young age. We also know that children are able to orient toward sound sources at a very young age (Ashmead, Clifton & Perrin, 1987; Muir & Field, 1979; Muir, Clifton & Clarkson, 1989), so it follows that directional microphones could potentially improve their speech reception ability in the presence of competing sounds. However, because of concerns about reduced access to non-frontal speech and environmental sounds, audiologists are often reluctant to fit young children and infants with hearing aids equipped with directional settings for fear of detrimental effects on incidental learning.

Ching, O’Brien, Dillon, Chalupper, Hartley, Hartley, Raicevich and Hain (2009) investigated head orientation and the opportunity for young children to benefit from directional hearing aid use in everyday environments. Prior research had shown benefits of directionality in laboratory conditions (Bohnert & Brantzen, 2004; Condie, Scollie, & Checkley, 2002; Kuk, Kollofski, Brown, Melum & Rosenthal, 1999), but it was unknown how directionality would affect speech reception in more typical, naturalistic situations. The goal of the study was twofold: 1) to determine the potential benefit of directionality on reception of speech in naturalistic listening situations, and 2) to examine potentially detrimental effects of directionality on non-frontal sounds.

The authors recruited eleven children with normal hearing and sixteen children with moderate hearing loss between the ages of 11 months and 6.5 years. The children were fitted with behind-the-ear, wide dynamic range hearing aids with directional microphones. None of them had prior experience with directionality in their personal hearing aids.

Video recordings of the children were obtained in four scenarios that represent everyday situations. Diary entries from parents and caregivers were collected to identify listening situations that could account for approximately 80% of each child’s weekly routine. It was hoped that the diary entries could help predict how often the children were likely to be in situations where directionality could be beneficial.

The video recordings of the children in typical listening scenarios were used to evaluate the proportion of time that they were oriented toward primary speech sources. The four scenarios were:

* The child interacting directly with a caregiver in a play situation
* The child NOT interacting directly with adults in the same room
* The child indoors with other children and adults
* The child outdoors with other children and adults

During the recordings, the researchers logged the time during which speech was “present”. Speech was deemed “present” whenever a primary talker could be identified, whether or not they were addressing the child directly.

Video analysis revealed that in the one-to-one situation, the children oriented themselves toward the talker almost 60% of the time. In the remaining group scenarios, the children oriented toward the primary talker between 30-50% of the time, even if they were not being directly addressed by the talker. They were least likely to face the talker in the second scenario, in which adults were present but the child was not engaged in play with adults or other children. Interestingly, age and the presence of hearing loss did not affect the proportion of time that the children spent facing the talker.

Examination of the caregivers’ diaries revealed that the majority of the children’s time was spent on indoor activities, particularly in group situations. Children with normal hearing were slightly more likely to participate in group activities than hearing-impaired children were. Conversely, hearing-impaired children were somewhat more likely than normal-hearing children to participate in one-to-one activities.

Overall, it was determined that directionality had a positive effect on speech reception, because:
* children oriented themselves toward the primary talker more than 50% of the time
* directionality improved SNR for speech in front of the child, especially in group situations
* diary entries showed that the children frequently participated in group activities

It was also determined that directionality is not likely to have detrimental effects on the perception of incidental speech and environmental sounds. The children still oriented themselves toward primary speech sources more than 40% of the time, even when talkers were not directly addressing them. Furthermore, the authors pointed out that the changes in SNR were small; a change of that size can be enough to significantly improve speech reception from the front in the presence of background noise, but is less likely to affect perception of dominant sound sources from the rear. It follows, then, that directional microphone settings in hearing aids could benefit young pediatric hearing aid users by improving the signal-to-noise ratio and therefore the reception of speech information, especially in group situations.

The authors advised that directional hearing aid programs, partly because of inherent decreases in low-frequency gain, might not always be advisable for children, especially in quiet conditions. They recommended the use of directional settings with equalized frequency responses to adjust for the reduction in low-frequency gain and suggested that switchable instruments would be best, to allow for omnidirectional hearing in quiet conditions and directionality in the presence of noise. Because young children and infants are not capable of adjusting hearing aid settings on their own, automatically adjustable instruments were suggested, especially those that can prioritize speech from a dominant talker even from non-frontal directions. Today we have a wide variety of automatically adjustable directional instruments available at a broad range of price points. This, coupled with ongoing improvements in speech enhancement and noise reduction in hearing aid circuitry, indicates that clinicians will have even better tools to help hearing-impaired children function in noisy, everyday situations.

The authors underscored the importance of thoroughly counseling caregivers on the effects of directionality in various listening environments. For instance, caregivers should pay attention to the child’s head orientation and positioning and should initiate face-to-face communication at close proximity whenever possible, particularly in noisy situations. Clinical audiologists routinely counsel patients on proper positioning and the importance of face-to-face communication to reduce the effects of background noise on speech perception. Because young, hearing-impaired children rely on better signal-to-noise ratios to receive and process speech information in their everyday activities, and because they may not always orient themselves toward primary speech sources, it is particularly important for their caregivers to understand how they can help maximize the benefit of the child’s directional microphone hearing aids.

References:
Ashmead, D.H., Clifton, R.K. & Perrin, E.E. (1987). Precision of auditory localization in human infants. Developmental Psychology, 23, 641-647.

Bohnert, A., & Brantzen, P. (2004). Experiences when fitting children with a digital directional hearing aid. Hearing Review, 11, 50-55.

Ching, T.Y.C., O’Brien, A., Dillon, H., Chalupper, J., Hartley, L., Hartley, D., Raicevich, & Hain, J. (2009). Directional effects on infants and young children in real life: implications for amplification. Journal of Speech Language and Hearing Research, 52, 1241-1254.

Condie, R.K., Scollie, S.D., & Checkley, P. (2002). Children’s performance: Analog versus digital adaptive dual-microphone instruments. Hearing Review, 9, 40-43.

Crandell, C., & Smaldino, J. J. (2004). Classroom acoustics. In R.D. Kent (Ed.), The MIT Encyclopedia of communication disorders (pp 442-444). Cambridge, MA: The MIT Press.

Johnstone, P.M. & Litovsky, R.Y. (2006). Effect of masker type and age on speech intelligibility and spatial release from masking in children and adults. The Journal of the Acoustical Society of America, 120, 2177-2189.

Kuk, F., Kollofski, C., Brown, S., Melum, A., & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535-548.

Muir, D., & Field, J. (1979). Newborn infants orient to sounds. Child Development, 50, 431-436.

Muir, D., Clifton, R.K., & Clarkson, M.G. (1989). The development of a human auditory localization response: A U-shaped function. Canadian Journal of Psychology, 3, 199-216.

The effect of digital noise reduction on listening effort: an article review

This article marks the first in a monthly series for StarkeyEvidence.com.

Each month, scholarly journals publish articles on a wide array of topics. Some of these valuable articles and their useful conclusions never reach professionals in the clinical arena. The aim of these entries is to discuss research findings and their implications for hearing professionals in daily clinical practice. Some topics may have general clinical relevance, while others may target specific aspects of hearing aids and their application.

This first discussion revolves around an article by Sarampalis, Kalluri, Edwards, and Hafter titled "Objective measures of listening effort: Effects of background noise and noise reduction." In this 2009 study, the authors pursue the sometimes elusive benefits of digital noise reduction. A review of past literature suggests that digital noise reduction, as implemented in hearing aids, benefits patients through improved sound quality, ease of listening, and a possible perceived improvement in speech understanding. Significant improvements in speech understanding are, however, not a routinely observed benefit of digital noise reduction, and some studies have shown significant decreases in speech understanding with active digital noise reduction.

In a 1992 article, Hafter and Schlauch suggest that noise reduction may lighten a patient's cognitive load, essentially freeing resources for other tasks. To better understand the proposed effect, imagine driving a car in an unfamiliar area. It is common for drivers to turn their stereo down, or off, when driving in a demanding situation. This helps not because music directly affects driving ability, but because the additional auditory input is distracting, effectively increasing the driver's cognitive load. Removing the distraction of the stereo frees cognitive resources and improves the driver's ability to focus on the complex task of driving.

To better understand how digital noise reduction may affect attention and cognitive load, the authors completed two experiments. In the first experiment, research participants were asked to repeat the last word of sentences presented in a background of noise. After eight sentences, the listener attempted to recall as many of the target words as possible. The sentence material contained both high-context and no-context conditions, for example:

High context: A chimpanzee is an ape

No context: She might have discussed the ape

In the second experiment, listeners were asked to judge whether a random number between one and eight was even or odd while simultaneously listening to and repeating sentences presented in a background of noise. Both experiments incorporated a dual-task paradigm: the first asked participants to repeat select words presented in noise while also remembering those words for later recall; the second required participants to repeat an entire sentence, presented in noise, while also completing a complex visual task.

Highlights from experiment one show:

  • performance in all conditions decreased as the signal-to-noise ratio became poorer;
  • overall performance in the no-context conditions was lower than in the high-context conditions;
  • a comparison of performance with and without digital noise reduction showed significantly better recall with digital noise reduction active

Highlights from experiment two show:

  • performance in all conditions decreased as the signal-to-noise ratio became poorer;
  • reaction times increased as the signal-to-noise ratio decreased;
  • at -6 dB SNR, reaction times were significantly faster with digital noise reduction active

The findings of this study show that the cognitive demands of non-auditory tasks, such as visual and memory tasks, inhibit a person's ability to understand speech in noise. In other words, secondary tasks make speech understanding more difficult. Additionally, digital noise reduction algorithms can reduce cognitive effort under adverse listening conditions. The authors discuss the value of using cognitive measures in hearing aid research and speculate that directional microphones may provide a cognitive benefit as well.

The clinical implications of this study suggest that patients may find benefits of wearing hearing aids that go beyond improved speech audibility. Modern signal processing may provide benefits that are only now being understood. For instance, a patient may report that hearing aids have made listening easier and that their new instruments seem to suppress noise better than the old ones, even though routine evaluation of speech understanding shows no significant differences between the two hearing aids.

Hearing aid success and benefit have traditionally been defined by the results of speech testing or questionnaires. If advanced technology can ease the task of listening, patients may be receiving benefits from their hearing aids that we are not currently prepared to evaluate in the clinic. Hopefully, work in this area will continue, increasing our understanding of the role that cognition plays in the success of the hearing aid wearer.

References:
Bentler, R., Wu, Y., Kettle, J., & Hurtig, R. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology, 47(8), 447-460.

Hafter, E. R., & Schlauch, R. S. (1992). Cognitive factors and selection of auditory listening bands. In A. Dancer, D. Henderson, R. J. Salvi, & R. P. Hammernik ( Eds.), Noise-induced hearing loss (pp. 303–310). Philadelphia: B.C. Decker.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language and Hearing Research, 52, 1230-1240.