Starkey Research & Clinical Blog

Can hearing aids reduce listening fatigue?

Hornsby, B.W.Y. (2013). The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, Published Ahead-of-Print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

A patient recently told me that he wanted to put on his glasses so he could hear me better.  He was joking, but was correct in understanding that visual cues help facilitate speech understanding. When engaged in conversation, a listener uses many sources of information to supplement the auditory stimulus. Visual cues from lip-reading, gestures and expressions as well as situational cues, conversational context and the listener’s knowledge of grammar all help limit the possible interpretations of the message. Conditions that degrade the auditory stimulus, such as reverberation, background noise and hearing loss cause increased reliance on other cues in order for the listener to “fill in the blanks” and understand the spoken message. The use of these additional information sources amounts to an increased allocation of cognitive resources, which has also been referred to as increased “listening effort” (Downs, 1982; Hick & Tharpe, 2002; McCoy et al., 2005).

Research suggests that the increased cognitive effort required for hearing-impaired individuals to understand speech may lead to subjective reports of mental fatigue (Hetu et al., 1988; Ringdahl & Grimby, 2000; Kramer et al., 2006). This may be of particular concern to elderly people and those with cognitive, memory or other sensory deficits. The increased listening effort caused by hearing loss is associated with self-reports of stress, tension and fatigue (Copithorne 2006; Edwards 2007). In a study of factory workers, Hetu et al. (1988) reported that individuals with difficulty hearing at work needed increased attention, concentration and effort, leading to increased stress and fatigue. It is reasonable to conclude that listening effort as studied in the laboratory should be linked to subjective associations of hearing loss with mental fatigue, but the relationship is not clear. Dr. Hornsby points out that laboratory studies typically evaluate short-term changes in resource allocation as listening ease is manipulated in the experimental task. However, perceived mental fatigue is more likely to result from sustained listening demands over a longer period of time, e.g., a work day or social engagement lasting several hours (Hetu et al., 1988; Kramer et al., 2006).

The purpose of Dr. Hornsby’s study was to determine if hearing aids, with and without advanced features like directionality and noise reduction, reduce listening effort and subsequent susceptibility to mental fatigue. He also investigated the relationship between objective measures of speech discrimination and listening effort in the laboratory with subjective self-reports of mental fatigue.

Sixteen adult subjects participated in the study. All had bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Twelve subjects were employed full-time and reported being communicatively active about 65% of the time during the day. The remaining subjects were not employed but reported being communicatively active about 61% of the day. Twelve subjects were bilateral hearing aid users and four subjects were non-users. Subjects were screened to rule out cognitive dysfunction. All participants were fitted with bilateral behind-the-ear hearing aids with slim tubes and dome ear tips.  Hearing aids were programmed in basic and advanced modes. In basic mode, the microphones were omnidirectional and all advanced features except feedback suppression were turned off. In advanced mode, the hearing aids were set to manufacturer’s defaults with automatically adaptive directionality, noise reduction, reverberation reduction and wind noise reduction. All subjects wore the study hearing aids for at least 1-2 weeks before the experimental sessions began.

For the objective measurements of listening effort, subjects completed a word recognition in noise task paired with an auditory word recall task and a measure of visual reaction time.  Subjects heard random sets of 8 to 12 monosyllabic words preceded by the carrier phrase, “Say the word…” They were asked to repeat the words aloud and the percentage of correct responses was scored. In addition, subjects were asked to remember the last 5 words of each list. The end of the list was indicated by the word “STOP” on a screen in front of the speaker. Subjects were instructed to press a button as quickly as possible when the visual prompt appeared. Because the lists varied from 8 to 12 items, subjects never knew when to expect the visual prompt.  To control for variability in motor function, visual reaction time was measured alone in a separate session, during which subjects were instructed to simply ignore the speech and noise.

Subjective ratings of listening effort and fatigue were obtained with a five-item scale, administered prior to the experimental sessions. Three questions were adapted from the Speech Spatial and Qualities of Hearing Questionnaire (SSQ: Gatehouse & Noble, 2004) and the remaining items were formulated specifically for the study. Questions were phrased to elicit responses related to that particular day (“Did you have to put in a lot of effort to hear what was being said in conversation today?”, “How mentally/physically drained are you right now?”).  The final two questions were administered before and after the dual-task session and measured changes in attention and fatigue due to participation in the experimental tasks.

The word recognition in noise test yielded significantly better results in both aided conditions than in the unaided condition, though there was no difference between the basic and advanced aided conditions. The differences between unaided and aided scores varied considerably, suggesting that listening effort for individual subjects varied across conditions. Unaided word recall was significantly poorer than basic or advanced aided performance, and the initial analysis showed a small but significant advantage for advanced settings over basic settings. However, in follow-up planned comparison tests the aided versus unaided difference was maintained, while the difference between the two aided conditions did not reach significance.

The reaction time measurement also assessed listening effort, or the cognitive resources required for the word recognition test. Reaction times were analyzed according to listening condition as well as block, which compared the first three trials (initial block) to the last three trials (final block). Increases in reaction time by block represented the effect of task-related fatigue. Analysis by listening condition showed that unaided reaction times were significantly longer than those in the advanced aided condition, but not those in the basic aided condition. In other words, subjects required more time to react to the visual stimulus in the unaided condition than they did in the advanced aided condition. There was no significant difference between the two aided conditions. There was a significant main effect for block; reaction times increased over the duration of the task. There was no interaction between listening condition and block; changes in performance over time were consistent across unaided and aided conditions.

One purpose of the study was to investigate the effect of hearing aid use on mental fatigue. Interestingly, comparison of initial and final blocks indicated that word recognition scores increased about 1-2% over time, but improvement over time did not vary across listening conditions. There was no decrease in performance on word recall over time, nor did changes in performance over time vary significantly across listening conditions. But reaction time did increase over time for all conditions, indicating a shift in cognitive resources away from the reaction time task and toward the primary word recognition task. Though the effect of hearing aid use was not significant, a trend suggested that fewer listeners showed increased reaction times in the aided conditions.

The questionnaires administered before the session probed perceived effort and fatigue throughout the day, whereas the questions administered before and after the task probed focus, attention and mental fatigue before and after the test session. In all listening conditions there was a significant increase in mental fatigue and difficulty maintaining attention after completion of the test session. A non-significant trend suggested some difference between unaided and aided conditions.

To identify other factors that may have contributed to variability, correlations for age, pure tone average, high frequency pure tone average, unaided word recognition score, SNR during testing, employment status and self-rated percentage of daily communicative activity were calculated with the subjective and objective measurements. None of the correlations were significant, indicating that none of these factors contributed substantially to the variability observed in the study.

Cognitive resource allocation is often studied with dual-task paradigms like the one used in this study. Decrements in performance on the secondary task indicate a shift in cognitive resources to the primary task. Presumably, factors that increase difficulty in the primary task will increase allocation of resources to the primary task.  In these experiments, the primary task was a word recognition test and the secondary tasks were word recall and reaction time measurements. Improved word recall and quicker reaction times in aided conditions indicate that the use of hearing aids made the primary word recognition task easier, allowing listeners to allocate more cognitive resources to the secondary tasks. Furthermore, reaction times increased less over time in aided conditions than in unaided conditions.  These findings specifically suggest that decreased listening effort with hearing aid use may have made listeners less susceptible to fatigue as the dual-task session progressed.
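The logic of the dual-task paradigm can be expressed as a simple calculation: listening effort is inferred from how much the secondary task slows down relative to a baseline measured alone. The sketch below is a schematic illustration with invented reaction times and a generic proportional-cost formula; it is not the statistical analysis used in Hornsby's study.

```python
def dual_task_cost(baseline_rt_ms, dual_task_rt_ms):
    """Proportional slowing of the secondary task under dual-task load.

    Larger values indicate more cognitive resources diverted to the
    primary (word recognition) task, i.e., greater listening effort.
    """
    return (dual_task_rt_ms - baseline_rt_ms) / baseline_rt_ms

# Invented mean visual reaction times (ms)
baseline = 350.0   # reaction-time task performed alone
unaided  = 520.0   # during word recognition, unaided
aided    = 450.0   # during word recognition, aided

print(round(dual_task_cost(baseline, unaided), 2))  # 0.49
print(round(dual_task_cost(baseline, aided), 2))    # 0.29
```

In this hypothetical example, the smaller aided cost would be interpreted as reduced listening effort: less slowing of the secondary task means more cognitive resources remained available.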

Though subjective reports in this study showed a general trend toward reduced listening effort and concentration in aided conditions, there was not a significant improvement with hearing aid use. This contrasts with previous work that has shown reductions in subjective listening effort with the use of hearing aids (Humes et al., 1999; Hallgren et al., 2005; Noble & Gatehouse, 2006). The author notes that auditory demands vary widely and that participants were asked to rate their effort and fatigue based on “today”; because this did not assess perceptions of sustained listening effort over a longer period of time, it may not have detected subtle differences among subjects. For instance, working in a quiet office environment may not highlight the benefit of hearing aids or the difference between an omnidirectional and directional microphone program, simply because the acoustic environment did not trigger the advanced features often enough. In contrast, working in a school or restaurant might show a more noticeable difference between unaided listening, basic amplification and advanced signal processing. Though subjects reported being communicatively active about the same proportion of the day, this inquiry didn’t account for sustained listening effort over long periods of time, or for varying work and social environments. These differences would likely affect overall listening effort and fatigue, as well as the value of advanced hearing aid features.

Clinical observations support the notion that hearing aid use can reduce listening effort and fatigue. Prior to hearing aid use, hearing-impaired patients often report feeling exhausted from trying to keep up with social interactions or workplace demands. After receiving hearing aids, patients commonly report being more engaged, more able to participate in conversation and less drained at the end of the day. Though previous reports have supported the value of amplification for reducing listening effort, Hornsby’s study is the first to provide experimental data on the potential for hearing aid use to reduce mental fatigue.

These findings have important implications for all hearing aid users, but may have particular importance for working individuals with hearing loss as well as elderly hearing impaired individuals.  It is important for any working person to maintain a high level of job performance and to establish their value at work. Individuals with hearing loss face additional challenges in this regard and often take pains to prove that their hearing loss is not adversely affecting their work.  Studies in workplace productivity underscore the importance of reducing distractions for maintaining focus, reducing stress and persisting at difficult tasks (Clements-Croome, 2000; Hua et al., 2011). Studies indicating that hearing aids reduce listening effort and fatigue, presumably by improving audibility and reducing the potential distraction of competing sounds, should provide additional encouragement for employed hearing-impaired individuals to pursue hearing aids.

 

References

Baldwin, C.L. & Ash, I.K. (2011). Impact of sensory acuity on auditory working memory span in young and older adults. Psychology of Aging 26, 85-91.

Bentler, R.A., Wu, Y., Kettel, J. (2008). Digital noise reduction: outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Clements-Croome, D. (2000). Creating the productive workplace. London: E & FN Spon.

Copithorne, D. (2006). The fatigue factor: How I learned to love power naps, meditation and other tricks to cope with hearing-loss exhaustion. [Healthy Hearing Website, August 21, 2006].

Downs, M. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Edwards, B. (2007). The future of hearing aid technology. Trends in Amplification 11, 31-45.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Hallgren, M., Larsby, B. & Lyxell, B. (2005). Speech understanding in quiet and noise, with and without hearing aids. International Journal of Audiology 44, 574-583.

Hetu, R., Riverin, L. & Lalande, N. (1988). Qualitative analysis of the handicap associated with occupational hearing loss. British Journal of Audiology 22, 251-264.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hua, Y., Loftness, V., Heerwagen, J. & Powell, K. (2011). Relationship between workplace spatial settings and occupant-perceived support for collaboration. Environment and Behavior 43, 807-826.

Humes, L.E., Christensen, L. & Thomas, T. (1999). A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. Journal of Speech, Language and Hearing Research 42, 65-79.

Kramer, S.E., Kapteyn, T.S. & Houtgast, T. (2006). Occupational performance: comparing normal-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. International Journal of Audiology 45, 503-512.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: Downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A 58, 22-33.

Noble, W. & Gatehouse, S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the SSQ. International Journal of Audiology 45, 172-181.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2013). The effect of individual variability on listening effort in unaided and aided conditions. Ear and Hearing (in press).

Ringdahl, A. & Grimby, A. (2000). Severe-profound hearing impairment and health related quality of life among post-lingual deafened Swedish adults. Scandinavian Audiology 29, 266-275.

Sarampalis,  A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.

Valente, M. & Mispagel, K. (2008) Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology 47(6), 329-336.

Can Aided Audibility Predict Pediatric Lexical Development?

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Despite advances in early hearing loss identification, hearing aid technology, and fitting and verification tools, children with hearing loss consistently demonstrate limited lexical abilities compared to children with normal hearing.  These limitations have been illustrated by poorer performance on tests of vocabulary (Davis et al., 1986), word learning (Gilbertson & Kamhi, 1995; Stelmachowicz et al., 2004), phonological discrimination, and non-word repetition (Briscoe et al., 2001; Delage & Tuller, 2007; Norbury, et al., 2001).

There are a number of variables that may predict hearing-impaired children’s performance on speech and language tasks, including the age at which they were first fitted with hearing aids and the degree of hearing loss. Moeller (2000) found that children who received earlier aural rehabilitation intervention demonstrated significantly larger receptive vocabularies than those who received later intervention. Degree of hearing loss, typically defined in studies as the pure-tone average (PTA), the average of pure-tone hearing thresholds at 500 Hz, 1000 Hz, and 2000 Hz (Fletcher, 1929), has been significantly correlated with speech recognition (Davis et al., 1986; Gilbertson & Kamhi, 1995), receptive vocabulary (Fitzpatrick et al., 2007; Wake et al., 2005), expressive grammar, and word recognition (Delage & Tuller, 2007) in some studies comparing hearing-impaired children to those with normal hearing.

In contrast, other studies have reported that pure-tone average (PTA) did not predict language ability in hearing-impaired children. Davis et al. (1986) tested hearing-impaired subjects between five and 18 years of age and found no significant relationship between PTA and vocabulary, verbal ability, reasoning, and reading. However, all subjects scored below average on these measures, regardless of their degree of hearing loss. Similarly, Moeller (2000) found that age of intervention affected vocabulary and verbal reasoning, but PTA did not. Gilbertson and Kamhi (1995) studied novel word learning in hearing-impaired children ranging in age from seven to 10 years and found that neither PTA nor unaided speech recognition threshold was correlated with receptive vocabulary level or word learning.

At a glance, it seems likely that degree of hearing loss should affect language development and ability, because hearing loss affects audibility, and speech must be audible in order to be processed and learned. However, the typical PTA of thresholds at 500 Hz, 1000 Hz, and 2000 Hz does not take into account high-frequency speech information beyond 2000 Hz. Some studies using averages of high-frequency pure-tone thresholds (HFPTA) have found a significant relationship between degree of loss and speech recognition (Amos & Humes, 2007; Glista et al., 2009).
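As a simple illustration of these threshold averages, the sketch below computes a conventional PTA (500, 1000, 2000 Hz) and a high-frequency variant from a hypothetical sloping audiogram. Note that the exact frequencies included in an HFPTA vary across studies; the 1000/2000/4000 Hz set used here is just one common convention.

```python
def threshold_average(audiogram, frequencies):
    """Mean hearing threshold (dB HL) across the given frequencies (Hz)."""
    return sum(audiogram[f] for f in frequencies) / len(frequencies)

# Hypothetical sloping audiogram: dB HL threshold keyed by frequency (Hz)
audiogram = {250: 15, 500: 25, 1000: 40, 2000: 55, 4000: 70}

pta = threshold_average(audiogram, (500, 1000, 2000))     # conventional PTA
hfpta = threshold_average(audiogram, (1000, 2000, 4000))  # high-frequency PTA

print(pta)    # 40.0
print(hfpta)  # 55.0
```

For a sloping loss like this one, the HFPTA is substantially higher than the PTA, which is exactly the high-frequency information the conventional PTA fails to capture.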

Because most hearing-impaired children now benefit from early identification and intervention, their pure-tone hearing threshold averages (PTA or HFPTA) might not be the best predictors of speech and language abilities in everyday situations. Rather, a measure that combines degree of hearing loss as well as hearing aid characteristics might be a better predictor of speech and language ability in hearing-impaired children. The Speech Intelligibility Index (SII; ANSI, 2007), a measure of audibility that weights the importance of different frequency regions based on the phonemic content of a given speech test, has proven to be predictive of performance on speech perception tasks for adults and children (Dubno et al., 1989; Pavlovic et al., 1986; Stelmachowicz et al., 2000). Hearing aid gain characteristics can be incorporated into the SII algorithm to yield an aided SII, which has been reported to predict performance on word repetition (Magnusson et al., 2001) and nonsense syllable repetition ability in adults (Souza & Turner, 1999). Because an aided SII incorporates the individual’s hearing loss and hearing aid characteristics into the calculation, it better represents how audibility affects an individual’s daily functioning.
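Conceptually, the SII is an importance-weighted sum of audibility across frequency bands. The toy sketch below illustrates that core idea only; it is not the ANSI S3.5 procedure, and the four-band weights and audibility fractions are invented for the example.

```python
def simplified_audibility_index(band_audibility, band_importance):
    """Toy SII-style index: importance-weighted sum of per-band audibility.

    band_audibility: fraction of the speech signal audible in each band (0-1)
    band_importance: importance weight of each band (weights sum to 1)
    """
    assert abs(sum(band_importance) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(a * w for a, w in zip(band_audibility, band_importance))

# Invented four-band example, low to high frequency
importance = [0.2, 0.3, 0.3, 0.2]
unaided    = [0.9, 0.5, 0.2, 0.0]   # high frequencies inaudible unaided
aided      = [1.0, 0.9, 0.7, 0.4]   # hearing aid gain restores audibility

print(round(simplified_audibility_index(unaided, importance), 2))  # 0.39
print(round(simplified_audibility_index(aided, importance), 2))    # 0.76
```

This is why two children with the same PTA can have very different aided SIIs: the index depends on how much of the speech spectrum the hearing aid actually makes audible in each band, not just on unaided thresholds.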

The purpose of the current study was to evaluate the aided SII as a predictor of performance on measures of word recognition, phonological working memory, receptive vocabulary, and word learning.  Because development in these areas establishes a base for later achievements in language learning and reading (Tomasello, 2000; Stanovich, 1986), it is important to determine how audibility affects lexical development in hearing-impaired children.  Though the SII is usually calculated based on the particular speech test to be studied, the current investigation used aided SII values based on average speech spectra.  The authors explained that vocabulary acquisition is a cumulative process, and they intended to use the aided SII as a measure of cumulative, rather than test-specific, audibility.

Sixteen hearing-impaired children with hearing aids (CHA) and 24 children with normal hearing (CNH) between six and nine years of age participated in the study.  All of the hearing-impaired children had bilateral hearing loss and had used amplification for at least one year.  All participants used spoken language as their primary form of communication.  Real-ear measurements were used to calculate the aided SII at user settings.  Because the goal was to evaluate the children’s actual audibility as opposed to optimal audibility, their current user settings were used in the experiment whether or not they met DSL prescriptive targets (Scollie et al., 2005).

Subjects participated in tasks designed to assess four lexical domains.  Word recognition was measured by the Lexical Neighborhood Test and Multisyllabic Lexical Neighborhood Test (LNT and MLNT; Kirk & Pisoni, 2000).  These tests each contain “easy” and “hard” lists, based on how frequently the words occur in English and how many lexical neighbors they have.  Children with normal lexical development are expected to show a gradient in performance with the best scores on the easy MLNT and poorest scores on the hard LNT.  Non-word repetition was measured by a task prepared specifically for this study, using non-words selected based on adult ratings of “wordlikeness”.  In the word recognition and non-word repetition tasks, children were simply asked to repeat the words that they heard.  Responses were scored according to the number of phonemes correct for both tasks.  Additionally, the LNT and MLNT tests were scored based on number of words correct.  Receptive vocabulary was measured by the Peabody Picture Vocabulary Test (PPVT-III; Dunn & Dunn, 1997) in which the children were asked to view four images and select the one that corresponds to the presented word.  Raw scores are determined as the number of items correctly identified and norms are applied based on the subject’s age.  Novel word learning was assessed using the same stimuli from the non-word repetition task, after the children were given sentence context and visual imagery to teach them the “meaning” of the novel words.  Their ability to learn the novel words was evaluated in two ways: a production task in which they were asked to say the word when prompted by a corresponding picture and an identification task in which they were presented with an array of four items and were asked to select the item that corresponded to the word that was presented.

On the word recognition tests, the children with hearing aids (CHA) demonstrated poorer performance than the children with normal hearing (CNH) for measures of word and phoneme accuracy, though both groups demonstrated the expected gradient, with performance improving in parallel fashion from the hard LNT test through the easy MLNT test.  There was a correlation between aided SII and word recognition scores, but PTA and aided SII were equally good at predicting performance.

On the non-word repetition task, which requires auditory perception, phonological analysis, and phonological storage (Gathercole, 2006), CHA again demonstrated significantly poorer performance than CNH, and CNH performance was near ceiling levels.  PTA and aided SII scores were correlated with non-word repetition scores.  Beyond the effect of PTA, it was determined that aided SII accounted for 20% of the variance on the non-word repetition task, which was statistically significant.

The receptive vocabulary test yielded similar results; CHA performed significantly worse than CNH and both PTA and aided SII accounted for a significant proportion of the variance.

The only variable that predicted performance on the word learning tasks was age, which only yielded a significant effect on the word production task.  On the word identification task, both the CHA and CNH groups scored only slightly better than chance and there were no significant effects of group or age.

As was expected in this study, children with hearing aids (CHA) consistently showed poorer performance than children with normal hearing (CNH), with the exception of the novel word learning task.  The pattern of results suggests that aided audibility, as measured by the aided SII, was better at predicting performance than degree of hearing loss as measured by PTA.  Greater aided SII scores were consistently associated with more accurate word recognition, more accurate non-word repetition, and larger receptive vocabulary.

Although PTA or HFPTA may represent the degree of unaided hearing loss, because the aided SII score accounts for the contribution of the individual’s hearing aids, it is likely a better representation of speech audibility and auditory perception in everyday situations.  The authors point out that depending on the audiometric configuration and hearing aid characteristics, two individuals with the same PTA could have different aided SIIs, and therefore different auditory experiences.

The results of this study underscore the importance of audibility for lexical development, which in turn has significant implications for further development of language, reading, and academic skills.  Therefore, the early provision of audibility via appropriate and verifiable amplification appears to be an important step in the development of speech and language.  The SII, which is already incorporated into some real-ear systems or is available in a standalone software package, is a verification tool that should be considered a standard part of the fitting protocol for pediatric hearing aid patients.

 

References

American National Standards Institute (2007). Methods for calculation of the Speech Intelligibility index (ANSI S3.5-1997[R2007]). New York, NY: Author.

Amos, N.E. & Humes, L.E. (2007). Contribution of high frequencies to speech recognition in quiet and noise in listeners with varying degrees of high-frequency sensorineural hearing loss. Journal of Speech, Language and Hearing Research 50, 819-834.

Briscoe, J., Bishop, D.V. & Norbury, C.F. (2001). Phonological processing, language and literacy: a comparison of children with mild-to-moderate sensorineural hearing loss and those with specific language impairment. Journal of Child Psychology and Psychiatry 42, 329-340.

Davis, J.M., Elfenbein, J., Schum, R. & Bentler, R.A. (1986). Effects of mild and moderate hearing impairments on language, educational and psychosocial behavior of children. Journal of Speech and Hearing Disorders 51, 53-62.

Delage, H. & Tuller, L. (2007). Language development and mild-to-moderate hearing loss: Does language normalize with age? Journal of Speech, Language and Hearing Research 50, 1300-1313.

Dubno, J.R., Dirks, D.D. & Schaefer, A.B. (1989). Stop-consonant recognition for normal hearing listeners and listeners with high-frequency hearing loss. II: Articulation index predictions. The Journal of the Acoustical Society of America 85, 355-364.

Dunn, L.M. & Dunn, D.M. (1997). Peabody Picture Vocabulary Test – III. Circle Pines, MN: AGS.

Fitzpatrick, E., Durieux-Smith, A., Eriks-Brophy, A., Olds., J. & Gaines, R. (2007). The impact of newborn hearing screening on communications development. Journal of Medical Screening 14, 123-131.

Fletcher, H. (1929). Speech and hearing in communication. Princeton, NJ: Van Nostrand Reinhold.

Gilbertson, M. & Kamhi, A.G. (1995). Novel word learning in children with hearing impairment. Journal of Speech and Hearing Research 38, 630-642.

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V. & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology 48, 632-644.

Kirk, K.I. & Pisoni, D.B. (2000). Lexical Neighborhood Tests. St. Louis, MO: AudiTEC.

Magnusson, L., Karlsson, M. & Leijon, A. (2001). Predicted and measured speech recognition performance in noise with linear amplification. Ear and Hearing 22, 46-57.

Moeller, M.P. (2000). Early intervention and language development in children who are deaf and hard of hearing. Pediatrics 106, e43.

Norbury, C.F., Bishop, D.V. & Briscoe, J. (2001). Production of English finite verb morphology: A comparison of SLI and mild-moderate hearing impairment. Journal of Speech, Language and Hearing Research 44, 165-178.

Pavlovic, C.V., Studebaker, G.A. & Sherbecoe, R.L. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. The Journal of the Acoustical Society of America 80, 50-57.

Scollie, S.D., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagary, D. & Pumford, J. (2005). The desired sensation level multistage input/output algorithm. Trends in Amplification 9(4), 159-197.

Souza, P.E. & Turner, C.W. (1999). Quantifying the contribution of audibility to recognition of compression-amplified speech. Ear and Hearing 20, 12-20.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly 21, 360-407.

Stelmachowicz, P.G., Hoover, B.M., Lewis, D.E., Kortekaas, R.W. & Pittman, A.L. (2000). The relation between stimulus context, speech audibility and perception for normal hearing and hearing-impaired children. Journal of Speech, Language and Hearing Research 43, 902-914.

Stelmachowicz, P.G., Pittman, A.L., Hoover, B.M. & Lewis, D.E. (2004). Novel word learning in children with normal hearing and hearing loss. Ear and Hearing 25, 47-56.

Tomasello, M. (2000). The item-based nature of children’s early syntactic development. Trends in Cognitive Sciences 4, 156-163.

Wake, M., Poulakis, Z., Hughes, E.K., Carey-Sargeant, C. & Rickards, F.W. (2005). Hearing impairment: A population study of age at diagnosis, severity and language outcomes at 7-8 years. Archives of Disease in Childhood 90, 238-244.

 

The Top 5 Audiology Research Articles from 2012

2012 was an impressive year for scientific publication in audiology research and hearing aids. Narrowing the selection to 15 or 20 articles was far easier than selecting 5 top contenders. After some thought and discussion, here is our selection of the top 5 articles published in 2012.


1. Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss

Cox, R.M., Johnson, J.A., & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, 33, 573-587.

This article is the second in a series investigating relationships between cochlear dead regions and hearing aid benefit. A sample of patients diagnosed with high-frequency cochlear dead regions demonstrated superior outcomes when prescribed hearing aids with a broadband response, as compared to a response that limited audibility above 1,000 Hz. These findings illustrate that patients with cochlear dead regions can benefit from, and prefer, amplification at frequencies within their diagnosed dead regions.

http://journals.lww.com/ear-hearing/Abstract/2012/09000/Implications_of_High_Frequency_Cochlear_Dead.2.aspx

2. The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

The pure-tone threshold is the most commonly referenced diagnostic information when counseling families of children with hearing loss. This study compared the predictive value of pure-tone thresholds and the aided speech intelligibility index for a group of children with hearing loss. The aided speech intelligibility index proved to be a stronger predictor of word recognition, word repetition, and vocabulary. These observations suggest that the aided speech intelligibility index is a useful tool in hearing aid fitting and family counseling.

http://jslhr.asha.org/cgi/content/abstract/55/3/764

3. NAL-NL2 Empirical Adjustments

Keidser, G., Dillon, H., Carter, L., & O’Brien, A. (2012). NAL-NL2 Empirical Adjustments. Trends in Amplification, 16(4), 211-223.

NAL-NL2 relies on several psychoacoustic models to derive gain targets for a given hearing loss. Yet it is well understood that these models have limitations and do not account for many individual factors. The empirical adjustments incorporated into NAL-NL2 highlight several factors that should be considered when prescribing gain for hearing aid users.

http://tia.sagepub.com/content/16/4/211.abstract

4. Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

While the outcomes of this study were not surprising, similar data had not previously been published in the refereed literature. The authors show that patients fit to a verified prescriptive target (i.e., NAL-NL1) report significantly better outcomes than patients fit to the lower-gain targets offered in fitting software as 'first-fit' prescriptions. This study underscores the importance of counseling patients regarding audibility and of real-ear measurement to verify audibility.

http://aaa.publisher.ingentaconnect.com/content/aaa/jaaa/2012/00000023/00000010/art00003

5. Conducting qualitative research in audiology: A tutorial

Knudsen, L.V., Laplante-Levesque, A., Jones, L., Preminger, J.E., Nielsen, C., Lunner, T., Hickson, L., Naylor, G., & Kramer, S.E. (2012). Conducting qualitative research in audiology: A tutorial. International Journal of Audiology, 51, 83-92.

A substantial majority of the audiologic research literature reports on quantitative data, discussing group outcomes and average trends. The challenges of capturing individual differences and clearly documenting field experiences require a different approach to data collection and analysis. Qualitative analysis leverages data from transcribed interviews or subjective reports to probe these individual experiences. This tutorial describes methods for qualitative analysis and cites existing studies that have used them.

http://informahealthcare.com/doi/abs/10.3109/14992027.2011.606283

The Speech, Spatial and Qualities of Hearing Scale (SSQ): A Gatehouse Legacy

Gatehouse, S. & Noble, W. (2004). The speech, spatial and qualities of hearing scale (SSQ). International Journal of Audiology, 43, 85-99.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors.

Self-assessment scales provide insight into the everyday experiences and perceptions of hearing-impaired individuals, making them valuable companions to laboratory research and helpful tools for clinicians. A variety of self-assessment indices are available for use with aided or unaided individuals, targeting issues including hearing aid usage patterns, binaural or monaural preference, volume and program preferences, ability to understand speech in quiet and noise, and ability to function in social situations. Gatehouse and Noble point out that most laboratory research and self-assessment scales view speech perception as the primary issue related to hearing handicap, with improved audibility and suppression of competing noise being the primary goals of auditory rehabilitation. But in everyday life, understanding speech may constitute only part of a hearing-impaired individual's perceived difficulties. For instance, it is important to locate and identify audible events in order to be fully aware of the environment and safely navigate a variety of surroundings. With the development of the SSQ, Gatehouse and Noble hoped to provide a more comprehensive measure of hearing disability, taking into account the perception of both spatial relationships and sound quality. Furthermore, they investigated the relationship between disabilities in these areas and perceived hearing handicap.

Research regarding auditory scene analysis by Bregman (1990) indicates that a listener in a group situation must first parse the complex acoustic environment into sound sources or “streams” so that they can be attended to and monitored individually. In other words, in a noisy situation, a listener must be able to group together the acoustic elements that make up one particular voice before processing the content of the message. Although it would be easier if it did, conversation rarely proceeds in an orderly fashion with one participant speaking at a time. Rather, in groups, one participant might initiate a response while the previous person is still speaking, or two individuals might speak at the same time. This requires a listener to not only attend to specific speech streams, but to monitor other speech sources in order to be ready to switch attention when necessary.  Accomplishing this task involves binaural hearing, localization, attention, cognition, and vision, and successful communication in groups can be affected by all of these variables. Because the SSQ investigates auditory perceptions of movement, location, and distance as well as sound quality perceptions, such as mood and voice identification in addition to issues related to speech communication, it may more realistically address how hearing loss affects an individual’s everyday life.

The goal of Gatehouse and Noble's study was twofold: to use the SSQ to examine what is disabling about hearing impairment and to determine how those disabilities affect hearing handicap. There were 153 participants in the study: 80 females and 73 males, with an average age of 71 years. The better-ear average (BEA) for octave frequencies from 500 to 4000 Hz was 38.8 dB; the worse-ear average (WEA) was 52.7 dB. In addition to the SSQ, subjects completed a 12-question hearing handicap scale developed in part from the Hearing Disabilities and Handicaps Scale (Hetu et al., 1994) and from an unpublished general health scale (Glasgow Health Status Inventory). The items were scored using a 5-point scale, yielding a global handicap score, with a higher score indicating greater handicap. Lower scores on the SSQ indicate greater disability, so negative correlations between the SSQ and handicap scores were expected.
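
The better- and worse-ear averages reported above are simple arithmetic means of pure-tone thresholds at the octave frequencies. As a minimal sketch (the threshold values below are hypothetical; the study reports only group means):

```python
# Illustrative only: better-ear and worse-ear pure-tone averages over
# the octave frequencies 500-4000 Hz, as described in the study.
# The threshold values below are hypothetical, not the study's data.

OCTAVE_FREQS = [500, 1000, 2000, 4000]  # Hz

def pure_tone_average(thresholds_db):
    """Mean threshold (dB HL) across the listed octave frequencies."""
    return sum(thresholds_db[f] for f in OCTAVE_FREQS) / len(OCTAVE_FREQS)

left = {500: 30, 1000: 35, 2000: 45, 4000: 60}   # dB HL, hypothetical
right = {500: 45, 1000: 50, 2000: 60, 4000: 75}  # dB HL, hypothetical

left_pta = pure_tone_average(left)
right_pta = pure_tone_average(right)

better_ear_average = min(left_pta, right_pta)  # BEA: lower (better) PTA
worse_ear_average = max(left_pta, right_pta)   # WEA

print(better_ear_average, worse_ear_average)  # 42.5 57.5
```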

The SSQ was designed to be administered as an interview rather than as a self-administered scale. The interview format ensures that the subject understands the questions and can request clarification when necessary. The scale is divided into three domains: 14 items on speech hearing, 17 items on spatial hearing, and 18 items on “other” functions and qualities of hearing. The “other” qualities section contains items related to recognition and segregation of sounds, clarity, naturalness, and listening effort.  Items are scored with ratings of 1 to 10, with the most positive response always represented with a higher number, on the right side of the response sheet. For example, the left side of the scale represents a complete absence of a quality, complete inability to perform a task, or complete effort required. The right side of the scale indicates complete presence of a quality, complete ability, or complete absence of effort.  The left to right, negative-to-positive scoring of items was consistently maintained throughout the scale in an effort to minimize confusion.
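
The item counts and 1-to-10 rating scheme described above can be sketched as follows (the domain structure is from the article; summarizing a domain as the mean item rating, and the response values themselves, are illustrative assumptions):

```python
# Illustrative sketch of SSQ domain scoring as described in the article.
# Each item is rated 1-10, with higher ratings indicating better function.
# Summarizing a domain as the mean item rating is an assumption here,
# and the responses are hypothetical.

from statistics import mean

SSQ_DOMAINS = {
    "speech": 14,     # items on speech hearing
    "spatial": 17,    # items on spatial hearing
    "qualities": 18,  # items on other functions and qualities of hearing
}

def domain_score(ratings):
    """Mean of item ratings (1 = worst, 10 = best) for one domain."""
    assert all(1 <= r <= 10 for r in ratings)
    return mean(ratings)

# Hypothetical respondent: middling speech ratings, lower spatial ratings
speech_ratings = [6] * SSQ_DOMAINS["speech"]
spatial_ratings = [4] * SSQ_DOMAINS["spatial"]

print(domain_score(speech_ratings), domain_score(spatial_ratings))  # 6 4
```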

They found that degree of hearing impairment correlated well with disability as measured by the SSQ and that the SSQ scores in turn correlated well with handicap, but that impairment itself did not correlate well with handicap. This result was expected and was in agreement with previous research. Asymmetry of hearing loss was not correlated significantly with items in the speech-hearing domain, but did correlate strongly with spatial-hearing and “other” quality domain items such as ease of listening, clarity, and sound segregation.
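
The expected direction of these relationships can be illustrated with a small Pearson correlation sketch (the paired scores below are hypothetical, not the study's data):

```python
# Illustrative: the expected negative correlation between SSQ scores
# (higher = less disability) and handicap scores (higher = more handicap).
# All data here are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

ssq = [8.0, 7.0, 6.5, 5.0, 4.0, 3.0]  # mean SSQ ratings, hypothetical
handicap = [10, 15, 18, 30, 38, 45]   # global handicap scores, hypothetical

r = pearson(ssq, handicap)
print(r)  # strongly negative: more disability pairs with more handicap
```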

Examination of SSQ scores within the speech hearing domain showed that the highest ratings were given for one-to-one conversation in quiet. The lowest ratings were for group conversations and contexts in which attention must be divided among two or more sound sources simultaneously.  In the spatial hearing domain, respondents generally rated their directional hearing ability higher than the ability to judge distance or movement. For the “other” qualities domain, items related to naturalness of one’s own voice, recognition of the mood of others from their voices, and recognition of music had the highest scores and those related to ease of listening had the lowest scores.

Following examination of the SSQ scores themselves, the individual items within each of the three SSQ domains were ranked according to the strength of their correlation with hearing handicap. Within the speech-hearing domain, hearing handicap was most influenced by disability in contexts requiring divided or rapidly shifting attention: conversation in a group of people, following two conversations at once, and missing the start of what the next speaker says.  However, handicap was also influenced by difficulty talking to one person in quiet conditions. It is not surprising that a person who perceives difficulty understanding speech in relatively favorable conditions would experience greater concern about their overall communication ability.  Difficulty understanding conversation in noisy situations can be externalized or blamed on the environmental conditions, whereas difficulty in quiet is likely to be internalized and attributed to one’s own disability.

Interestingly, many items in the spatial-hearing domain were as highly correlated with handicap as those within the speech-hearing domain. Questions related to determining the distance and paths of movement, the distance of vehicles, the direction of a barking dog, and locating a speaker in a group setting all contributed to perceived handicap. This underscores the importance of spatial hearing for environmental awareness as well as successful participation in conversation and suggests that examination of spatial hearing may help clinicians and researchers better understand an individual’s experience with their hearing loss.

Several of the items in the “other” qualities section of the SSQ were strongly correlated with handicap. The ability to identify a speaker, to judge a speaker’s mood, the clarity and naturalness of sound, and the effort needed to engage in conversation were among the items most strongly related to hearing handicap.  The authors explain that these abilities affect an individual’s sense of social competence. Failure to accurately interpret the identity or mood of a speaker or the need for increased effort to participate in conversation may have an isolating effect, causing an individual to avoid social situations or even telephone conversations because they fear they will be unable to participate fully or successfully.

Not surprisingly, Gatehouse and Noble found that hearing thresholds were related to SSQ disability scores and SSQ scores were related to handicap, but impairment itself was not strongly correlated with handicap. This finding was expected and is in agreement with previous reports (Hetu et al., 1994). The relationship between impairment, disability, and handicap is important and is familiar to audiologists, in that we routinely discuss how a patient's hearing loss affects his or her activities and everyday lifestyle. Though consideration of the audiogram is of course important, the way hearing loss interacts with work-related and social activities (things a person must do or enjoys doing) more likely determines their perceived handicap and therefore their motivation to pursue auditory rehabilitation.

The finding that spatial hearing disability was strongly correlated with handicap may have implications for asymmetric hearing loss as well as the fitting of bilateral hearing aids. Individuals with asymmetric hearing thresholds will have more difficulty localizing sound and therefore may experience more of a handicap related to the discrimination of auditory spatial relationships and movement. For instance, an individual with asymmetric hearing loss might hear conversation easily but might experience stress because they are unable to judge the location or approach of a car that is not visible. Because individuals with a better or normal ear might rate their speech discrimination performance relatively well in quiet and even moderately noisy places, an assessment scale that examines only speech-related hearing disability might underestimate their perceived hearing handicap. Consideration of spatial hearing deficits might therefore provide a more realistic and helpful assessment of an individual's functional difficulties. Perception of auditory spatial relationships is likely to be improved with the use of bilateral hearing aids for individuals with binaural hearing loss, so the correlation between spatial hearing items on the SSQ and hearing handicap may also be viewed as further support for the recommendation of two hearing aids.

The correlations across the three domains of the SSQ to the scores on the handicap scale indicate that the SSQ is effectively addressing several variables that contribute to perceived hearing handicap. The impact of speech recognition and discrimination on perceived handicap is well established. The impact of other skills such as determining distance, movement, voice quality, and mood is less well understood but may be an equally important component in understanding an individual’s feelings of social competence and confidence as well as their sense of safety and orientation in a variety of environments. The SSQ provides clinicians and researchers with an additional tool to more fully understand the impact of hearing loss on everyday lives.

References

Bregman, A. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Gatehouse, S. & Noble, W. (2004). The speech, spatial and qualities of hearing scale (SSQ). International Journal of Audiology 43, 85-99.

Hetu, R., Getty, L., Phlibert, L., Desilets, F., Noble, W. (1994). Development of a clinical tool for the measurement of the severity of hearing disabilities and handicaps. Journal of Speech-Language Pathology and Audiology 18, 83-95.

Summarizing the Benefits of Bilateral Hearing Aids

Mencher, G.T. & Davis, A. (2006). Bilateral or unilateral amplification: is there a difference? A brief tutorial. International Journal of Audiology 45 (S1), S3-11.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors. 

The decision to fit binaural hearing loss with bilateral hearing aids is influenced by a number of factors. The recommendation of two hearing aids may be contraindicated for financial reasons or because of near-normal hearing or profound loss in one ear, but the consensus among clinicians is that bilateral amplification is preferable for individuals with aidable hearing loss in both ears. Mencher and Davis examine a variety of considerations that may affect bilateral benefit, including speech intelligibility in noise, localization and directionality, sound quality, tinnitus suppression, binaural integration and auditory deprivation. Research in these areas is discussed with reference to clinical indications for hearing aid fitting.

The authors begin with a clarification of the terms binaural and bilateral. They explain that a bilateral fitting refers to the use of hearing aids on both ears, whereas binaural hearing refers to the integration of signals arriving at two ears independently. They point out that standardization of these terms should help avoid confusion in the discussion of bilateral versus unilateral hearing aid fittings.

Because speech is the most important mode of everyday communication, studies of hearing aid benefit typically employ speech intelligibility measures in quiet and noisy conditions. Early studies investigating aided speech intelligibility yielded conflicting reports, with some in favor of bilateral amplification (Markle & Aber, 1958; Wright & Carhart, 1960; Olsen & Carhart, 1967; Markides, 1980) and others showing no difference between unilateral and bilateral fittings (Hedgecock & Sheets, 1958; DeCarlo & Brown, 1960; Jerger & Dirks, 1961).

Many early studies were criticized for methodological choices that could have obscured bilateral benefits, such as the use of a single noise source or test materials that were not representative of everyday, conversational speech. More recent work has examined speech intelligibility under conditions that more closely approximate the real world, with multiple noise sources and sentence-based test materials. For instance, Kobler & Rosenhall (2002) studied intelligibility and localization for randomly presented speech from locations surrounding the listener, in the presence of speech-weighted noise from multiple loudspeakers. They found that bilateral amplification improved performance over unilateral fittings and unaided conditions. Their findings confirmed earlier work by Kobler, Rosenhall and Hansson (2001), in which bilateral benefits were reported for speech recognition, localization and sound quality, and were later corroborated by investigations such as that of McArdle and colleagues (2012).

Sound localization has implications for everyday environmental awareness as well as speech perception. Studies of auditory scene analysis (Bregman, 1990) underscore the importance of localization for identifying and attending to specific sound sources. This has specific relevance for understanding conversation in complex environments in which the speech must first be identified and separated from competing sound sources before higher level processing can occur (Stevens, 1996). Therefore, the effect of hearing aids on localization is likely to impact an  individual’s overall ability to understand conversational speech in a noisy environment.

The physical presence of a hearing aid and earmold obscures some pinna-based localization cues, but the use of bilateral hearing aids should nevertheless aid horizontal localization under some circumstances. Individuals with moderate to severe hearing loss who wear only one hearing aid may hear some sounds only on the aided side, whereas binaural time and intensity cues may be preserved in a bilateral fitting. Current hearing aids with bilateral data exchange that can account for interaural phase and time cues may offer additional binaural localization benefits. Individuals in social situations are likely to be conversing with one or more people at roughly the same vertical elevation. Therefore, preservation of horizontal localization cues with bilateral hearing aids may outweigh the loss of pinna cues and may have more relevance for speech intelligibility, especially in noisy conditions.

Sound quality encompasses a number of attributes that include clarity, fullness, loudness and naturalness. Bilateral hearing aid use may improve the quality of these attributes. Balfour & Hawkins (1992) examined eight sound quality judgments for listeners with mild to moderate hearing loss, tested with unilateral and bilateral hearing aids.  Subjects judged the sound quality of speech in quiet and noise, and music in a test booth, living room and concert hall. Subjects had a significant preference for bilateral hearing aids for all sound quality dimensions, with clarity being ranked as the most important.  This finding is in agreement with Erdman & Sedge (1981) who reported that clarity was the most significant benefit of bilateral amplification. Naidoo & Hawkins (1997) reported bilateral benefits for sound quality and speech intelligibility in high levels of background noise.

Tinnitus suppression is another area in which bilateral hearing aid use appears to offer an advantage over unilateral fittings. A questionnaire study by Brooks & Bulmer (1981) found that 66.52% of bilaterally aided respondents experienced reduction in tinnitus, versus only 12.7% with unilateral aids. Surr, Montgomery & Mueller (1985) reported that about half of their subjects with tinnitus experienced partial or total relief with hearing aid use. Melin et al. (1987) found differences in tinnitus relief based on the number of hours of use per day. Taken together, these studies suggest that individuals who suffer from tinnitus may experience relief with hearing aids and are more likely to do so with consistent, bilateral hearing aid use.

There is evidence to suggest that some individuals experience better results with unilateral fittings. Binaural integration of simultaneous auditory signals in asymmetric hearing loss may have negative implications for bilateral hearing aid use. This was first described by Arkebauer, Mencher & McCall (1971), who reported that amplified signals presented to two asymmetrically impaired ears resulted in speech discrimination that was worse than the better ear alone and similar to the poorer ear alone. Hood & Prasher (1990) simulated bilateral hearing loss and found poorer speech discrimination when dissimilar distortion patterns were sent to each ear and significant improvement when identical distortion patterns were sent to the two ears. The results were interpreted to suggest that an inability to process incongruent or dissimilar speech input from the two ears could contribute to the rejection of two hearing aids. Jerger et al. (1993) reported similar findings and explained that stimulation of the poor ear interfered with the response of the better ear. They posited that binaural interference could affect approximately 10% of elderly hearing aid users. Binaural interference may be caused by age-related atrophy or corpus callosum demyelination resulting in poor inter-hemispheric transfer of auditory information (Chmiel et al., 1997), and individuals experiencing binaural interference may perform better with one hearing aid.

Auditory deprivation is commonly cited when recommending bilateral hearing aids. Silman, Gelfand and Silverman (1984) first described the effect, noting that in unilateral fittings on bilaterally impaired individuals, speech discrimination in the unaided ear was reduced relative to the aided ear, while pure-tone thresholds and speech reception thresholds were not affected. Gelfand, Silman & Ross (1987) again found reduced speech recognition scores over time (4-17 years) for unilaterally aided individuals. Hurley (1999) also reported that unilaterally aided subjects were more likely to experience monaural reductions in word recognition scores than bilaterally aided subjects. Subsequent studies determined that the auditory deprivation effect was reversible: subjects who were later fit with a second aid experienced improved word recognition (Silverman & Silman, 1990; Silverman & Emmer, 1993; Silman et al., 1992). Byrne & Dirks (1996) expanded on the concept of auditory deprivation, reporting that it may also affect localization and intensity discrimination. Though more research in this area may be warranted, Mencher & Davis note that the best way to treat auditory deprivation is to avoid it, with bilateral amplification as part of the solution.

Though research provides insight into possible predictors of success, an important measure of success can be found in post-fitting reports of unilateral and bilateral hearing aid users. Self-assessments of hearing handicap and disability can help hearing aid users express how their hearing aids affect important activities and everyday communication. Some investigations using self-assessment techniques have revealed a preference for bilateral hearing aid use (Chung & Stephens, 1983, 1986; Stephens et al., 1991) whereas others have revealed a preference for unilateral fittings (Cox et al., 2011). Because many factors contribute to an individual's preferences and perceived success, self-assessments should be used in combination with verification measures and consideration of individual attributes, such as age, experience with hearing aids, audiometric configuration and speech discrimination ability. Most patient questionnaires target specific topics such as satisfaction, comfort, usage patterns and speech intelligibility, so it may be useful to combine measures to gain comprehensive information about a patient's experience. The Speech, Spatial and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004), for instance, measures hearing disability in a variety of circumstances. Because it also examines directional, distance and spatial perception, the SSQ may provide more insight into the effect of bilateral versus unilateral amplification on hearing in everyday situations.

Mencher and Davis’s review suggests that there are numerous likely benefits to bilateral amplification for most, but not necessarily all, individuals. Bilateral hearing aids may offer an improved signal-to-noise ratio, reduced annoyance from tinnitus, improved sound quality and better localization in complex listening environments. Hearing aids are typically dispensed with a trial period of 30-60 days with relatively low financial risk to the patient in the event of a return. Therefore, it seems sensible to recommend bilateral fittings for candidates with bilateral hearing loss, knowing that a return of one hearing aid will be possible if contraindications to bilateral hearing aid use should arise during the trial period. Close monitoring of performance and comfort during the trial is essential, especially for individuals with asymmetrical hearing loss or a history of unilateral hearing aid use. In these cases, it may be necessary to reduce gain and output or increase compression in the poorer or previously unaided ear, to accommodate the likely inter-aural differences in acclimatization rate. Ears with more hearing loss and/or less aided experience generally take more time to adapt to amplification, so gradual adjustment and focused counseling may be necessary to eventually achieve satisfactory binaural balance.

The authors conclude by pointing out that the only way to know if a patient is successful with their hearing aids is to ask them! The interaction of several factors such as sound quality, localization, noise tolerance, loudness discomfort and physical comfort will contribute to patient satisfaction. Ultimately, clinicians should develop a clinical strategy that employs objective and subjective measures to truly document benefit and satisfaction with the hearing aid fitting—be it unilateral or bilateral.

References

Arkebauer, H.J., Mencher, G.T. & McCall, C. (1971). Modification of speech discrimination in patients with binaural asymmetrical hearing loss. Journal of Speech and Hearing Disorders 36, 208-212.

Balfour, P.B. & Hawkins, D.B. (1992). A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli. Ear and Hearing 13, 331-339.

Bregman, A.S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, Mass.: Bradford Books, MIT Press.

Brooks, D.N. & Bulmer, D. (1981). Survey of Binaural Hearing Aid Users. Ear and Hearing 2, 220-224.

Byrne, D., Noble, W. & Lepage, B. (1992). Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. Journal of the American Academy of Audiology 3, 369-382.

Chmiel, R., Jerger, J., Murphy, E., Pirozzolo, F. & Tooley-Young, C. (1997). Unsuccessful use of binaural amplification by an elderly person. Journal of the American Academy of Audiology 8, 1-10.

Chung, S. & Stephens, S.D.G. (1983). Binaural hearing aid use and the hearing measurement scale. IRCS Medical Science, 11:721-722. In W. Noble, 1998. Self-Assessment of Hearing and Related Functions, (London: Whurr).

Chung, S. & Stephens, S.D.G. (1986). Factors influencing binaural hearing aid use. British Journal of Audiology 20, 129-140.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

DeCarlo, L.M. & Brown, W.J. (1960). The effectiveness of binaural hearing for adults with hearing impairment. Journal of Auditory Research 1, 35-76.

Erdman, S. & Sedge, R. (1981). Subjective comparisons of binaural versus monaural amplification. Ear and Hearing 2, 225-229.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Gelfand, S., Silman, S. & Ross, L. (1987). Long term effects of monaural, binaural and no amplification in subjects with bilateral hearing loss. Scandinavian Audiology 16, 201-207.

Hebrank, J. & Wright, D. (1974). Spectral cues used in the localization of sound sources on the median plane. Journal of the Acoustical Society of America 56, 1829-1834.

Hedgecock, L.D. & Sheets, B.V. (1958). A comparison of monaural and binaural hearing aids for listening to speech. Archives of Otolaryngology 68, 624-629.

Hood, J.D. & Prasher, D.K. (1990). Effect of simulated bilateral cochlear distortion on speech discrimination in normal subjects. Scandinavian Audiology 19, 37-41.

Hurley, R.M. (1999). Onset of auditory deprivation. Journal of the American Academy of Audiology 10, 529-534.

Jerger, J. & Dirks, D. (1961). Binaural hearing aids: An enigma. Journal of the Acoustical Society of America 33, 537-538.

Jerger, J., Silman, S., Lew, H.L. & Chmiel, R. (1993). Case studies in binaural interference: converging evidence from behavioral and electrophysiological measures. Journal of the American Academy of Audiology 4, 122-131.

Köbler, S., Rosenhall, U. & Hansson, H. (2001). Bilateral hearing aids – effects and consequences from a user perspective. Scandinavian Audiology 30, 223-235.

Köbler, S. & Rosenhall, U. (2002). Horizontal localization and speech intelligibility with bilateral and unilateral hearing aid amplification. International Journal of Audiology 41, 392-400.

Markides,  A. (1980). Binaural Hearing Aids. NY: Academic Press.

Markle,  D.M. & Aber, W. (1958). A clinical evaluation of monaural and binaural hearing aids. Archives of Otolaryngology 67, 606-608.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Mehrgardt, S. & Mellert, V. (1977). Transformational characteristics of the external human ear. Journal of the Acoustical Society of America 61, 1567-1576.

Melin, L., Scott, B., Lindberg, P. & Lyttkens, L. (1987). Hearing aids and tinnitus – an experimental group study. British Journal of Audiology 21, 91-97.

Mencher, G.T. & Davis, A. (2006). Bilateral or unilateral amplification: is there a difference? A brief tutorial. International Journal of Audiology 45 (S1), S3-S11.

Middlebrooks, J.C. & Green, D.M. (1991). Sound localization by human listeners. Annual Review of Psychology 42, 135-159.

Naidoo, S.V. & Hawkins, D.B. (1997). Monaural/binaural preferences: effect of hearing aid circuit on speech intelligibility and sound quality. Journal of the American Academy of Audiology 8, 188-202.

Noble, W., Sinclair, S. & Byrne, D. (1998). Improvement in aided sound localization with open earmolds: Observations in people with high-frequency hearing loss. Journal of the American Academy of Audiology 9, 25-34.

Olsen, W.R. & Carhart, R. (1967). Development of test procedures for evaluation of binaural hearing aids. Bulletin of Prosthetics Research 10, 22-49.

Searle, C., Braida, L., Cuddy, D. & Davis, M. (1975).  Binaural pinna disparity: Another auditory localization cue. Journal of the Acoustical Society of America 57, 448-455.

Silverman, C. & Emmer, M.B. (1993). Auditory deprivation and recovery in adults with asymmetric sensorineural hearing impairment. Journal of the American Academy of Audiology 4, 338-346.

Silverman, C. & Silman, S. (1990).  Apparent auditory deprivation from monaural amplification and recovery with binaural amplification: 2 case studies. Journal of the American Academy of Audiology 1, 175-180.

Silman, S., Gelfand, S. & Silverman, C. (1984). Late-onset auditory deprivation: effects of monaural versus binaural hearing aids. Journal of the Acoustical Society of America 76, 1357-1362.

Silman, S., Silverman, C.A., Emmer, M.B. & Gelfand, S. (1992).  Adult-onset auditory deprivation. Journal of the American Academy of Audiology 3, 390-396.

Stephens, S.D., Callaghan, D.E., Hogan, S., Meredith, R., Rayment, A. & Davis, A.C. (1991). Acceptability of binaural hearing aids: a crossover study. Journal of the Royal Society of Medicine 84, 267-269.

Stevens, K. (1996). Amplitude-modulated and unmodulated time-varying sinusoidal sentences: the effects of semantic and syntactic context. Doctoral dissertation, Northwestern University (University of Michigan Press, AAT 9632785).

Surr, R.K., Montgomery, A.A. & Mueller, H.G. (1985). Effect of amplification on tinnitus among new hearing aid users. Ear and Hearing 6, 71-75.

Wright, H.N. & Carhart, R. (1960). The efficiency of binaural listening among the hearing impaired. Archives of Otolaryngology 72, 789-797.

Can preference for one or two hearing aids be predicted?

Noble, W. (2006). Bilateral hearing aids: A review of self-reports of benefit in comparison with unilateral fitting. International Journal of Audiology, 45(Supplement 1), S63-S71.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The potential benefits of bilateral and unilateral hearing aids have been debated for years. Laboratory studies and clinical recommendations generally support the use of two hearing aids for individuals with bilateral hearing loss. Yet some field studies have produced equivocal reports. In his 2006 survey of bilateral and unilateral clinical field trials, William Noble discusses variables contributing to the lack of consensus and addresses a couple of commonly cited clinical rationales for bilateral hearing aid use. Though subject population, experimental design, degree of hearing loss and usage patterns vary from one study to another, factors emerge that help determine likelihood of success in unilateral and bilateral hearing aid fitting.

The strong predisposition for clinicians to recommend bilateral hearing aid use may be based on laboratory findings as well as common sense. Several studies have reported advantages of binaural listening (Dillon, 2001; McArdle et al., 2012), but clinicians often support their recommendation of two hearing aids with an analogy to binocular vision. Individuals with impaired vision no longer wear monocles (with apologies to English detectives) but instead opt for binocular corrective lenses. Noble argues that the visual analogy is not apt, partly because typical vision loss is not comparable to typical hearing loss. Vision loss that is treated with corrective lenses is most similar to a mild conductive hearing loss, which is rarely treated with hearing aids. Cochlear receptor damage in sensorineural hearing loss, the most common type of loss treated with hearing aids, introduces processing complexities that hearing aids cannot fully correct.

Though Noble’s comments are correct, the visual analogy is presented to hearing aid patients in an effort to explain how two hearing aids allow more effective use of bilateral listening cues, much as two corrective lenses can aid binocular, stereoscopic cues for depth and three-dimensional perception.  Some patients may have difficulty understanding how two hearing aids can provide beneficial cues, especially in noise, instead of additional distraction. The visual analogy, while admittedly not perfect, is a way of explaining more simply the benefit of bilateral perceptual cues.

The other commonly cited clinical rationale for bilateral hearing aid use is related to the auditory deprivation effect (Silman et al., 1984). In individuals with bilateral hearing loss, there is concern that if only one hearing aid is used, the unaided ear will be deprived of sound and suffer additional deterioration. This appears to be a long-term change in the unaided ear, affecting word recognition scores but not pure tone or speech reception thresholds. Further investigation is needed to determine whether this unaided-ear effect has implications for everyday hearing aid use and for social and work-related functioning, and thus whether it should weigh heavily in clinical recommendations.

Though most laboratory studies support the benefits of binaural listening, field studies and self-reports on bilateral hearing aid use have not always provided similar outcomes (Arlinger et al., 2003; Cox et al., 2011). For this reason, Noble reviewed evidence from clinical trials to determine the conditions under which bilateral hearing aid use is most likely to be beneficial and the patient attributes that most support the recommendation of bilateral hearing aids. Three of the reviewed studies were retrospective, drawing on reports from clinical patients months or years after they were fitted with unilateral or bilateral hearing aids. Two studies (Dirks & Carhart, 1962; Kochkin & Kuk, 1997) suggested that people who preferred bilateral hearing aid fittings had greater levels of hearing loss, though degree of hearing loss was not controlled. A third study, by Noble et al. (1995), carefully matched 17 sets of unilateral and bilateral users according to degree of hearing loss. They examined speech reception and directional and distance spatial perception in aided and unaided conditions. Significant benefits were seen when comparing aided and unaided conditions, but no differences were observed between the unilaterally and bilaterally aided groups. The subject sample had mild to moderate hearing losses, so some caution should be taken when extrapolating these findings to individuals with more severe hearing losses.

In contrast, a study of new hearing aid users resulted in a two-to-one preference for unilateral use after a six-month period (Schreurs & Olsen, 1985). Individuals in this study wore one aid and two aids alternately for one week at a time, which arguably could have adversely influenced their acclimatization. Hearing aid users experience a sometimes extended period of adjustment to amplification (Keidser et al., 2009), which can affect their subjective judgments of sound quality and overall benefit (Bentler et al., 1993). For instance, occlusion and unnatural perception of one’s own voice can be annoying to new hearing aid users and are often more pronounced with bilateral hearing aids. Though these qualities almost always improve significantly with consistent bilateral use, it is not surprising that inexperienced, intermittent bilateral users might prefer the subjective sound quality of wearing one hearing aid at a time. Additionally, Schreurs & Olsen’s study was conducted in 1985, when directional microphones were not in widespread use. Hearing aid users at that time often removed one hearing aid in noisy situations because bilateral omnidirectional microphones made surrounding noise sources too disruptive. A field study of unilateral versus bilateral use with modern hearing aids, allowing for adequate acclimatization, might yield different results.

Two follow-up studies examined subjects who were slightly younger than a typical clinical population (Brooks & Bulmer, 1981; Erdman & Sedge, 1981). Both studies found that a majority of subjects preferred the use of two hearing aids. These reports suggest that individuals whose activities require effective communication in challenging listening situations may prefer bilateral hearing aids. The remaining studies in Noble’s review were crossover studies, experiments in which clinical patients were randomly assigned to a unilateral or bilateral condition and crossed over to the other condition after several weeks. Stephens et al. (1991) found a greater degree of hearing loss and self-rated disability in the individuals who opted for bilateral hearing aid use, consistent with the retrospective reports of Dirks & Carhart (1962) and Kochkin & Kuk (1997) discussed earlier.

The studies examined in this literature review reveal several patterns. First, individuals with more severe hearing loss or perceived disability were more likely to prefer bilateral hearing aids. Subjects who preferred unilateral hearing aid use tended to have mild to moderate losses.  Second, participants employed in dynamic listening situations preferred bilateral hearing aid use, suggesting that individuals whose regular activities require effective communication in a variety of contexts may be more likely to benefit from the use of two hearing aids (Noble & Gatehouse, 2006).

Noble points out that laboratory studies cannot adequately consider the range of experiences encountered by hearing aid users in everyday situations. Laboratory research isolates variables for study, with subjects responding to specific stimuli in isolated, carefully contrasted conditions, whereas in everyday life hearing aid users encounter everything from single speech sources in quiet to multiple speech sources in competing noise. Conversely, clinical field trials probe the subjective responses of hearing aid users in a variety of real-world situations, but it can be difficult to extricate the specific variables affecting their perceptions. Though their outcomes may not always appear to agree, both types of study provide useful information to guide clinical practice.

It is clear that bilateral hearing aid use has the potential to reduce listening effort (Feuerstein, 1992) and to improve speech understanding, localization and receptiveness to lateral sounds (Noble & Gatehouse, 2006). Still, field studies consistently report a subset of patients who prefer unilateral hearing aid use. Whether environmentally or psychoacoustically motivated, the factors underlying these preferences remain unclear. Given the documented benefits, bilateral hearing loss should first be treated with the prescription of bilateral hearing aids; unilateral use should be considered only after the patient has adequate field experience and expresses a subjective preference for unilateral amplification.

References

Arlinger, S., Brorsson, B., Lagerbring, C., Leijon, A., Rosenhall, U. & Schersten, T. (2003). Hearing Aids for Adults – benefits and costs. Stockholm: Swedish Council on Technology Assessment in Health Care.

Bentler, R.A., Niebuhr, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness. II. Subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Byrne, D., Noble, W. & LePage, B. (1992). Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. Journal of the American Academy of Audiology 3, 369-382.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

Dillon, H. (2001). Monaural and binaural considerations in hearing aid fitting. In: Dillon, H., ed. Hearing Aids. Turramurra, Australia: Boomerang Press, 370-403.

Dirks, D. & Carhart, R. (1962). A survey of reactions from users of binaural and monaural hearing aids. Journal of Speech and Hearing Disorders 27(4), 311-322.

Feuerstein, J.F. (1992). Monaural versus binaural hearing: Ease of listening, word recognition and attentional effort. Ear & Hearing 13(2), 80-86.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2009). Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology 47, 621-635.

Kochkin, S. & Kuk, F. (1997). The binaural advantage: Evidence from subjective benefit and customer satisfaction data. The Hearing Review, 4.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Noble, W. (2006). Bilateral hearing aids: A review of self-reports of benefit in comparison with unilateral fitting. International Journal of Audiology, 45(Supplement 1), S63-S71.

Noble, W. & Gatehouse, S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 45(2), 172-181.

Noble, W., TerHorst, K. & Byrne, D. (1995). Disabilities and handicaps associated with impaired auditory localization. Journal of the American Academy of Audiology 6(2), 129-140.

Silman, S., Gelfand, S. & Silverman, C. (1984). Late-onset auditory deprivation: Effects of monaural versus binaural hearing aids. Journal of the Acoustical Society of America 76, 1357-1362.

True or False? Two hearing aids are better than one.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Audiologists are accustomed to recommending two hearing aids for individuals with bilateral hearing loss, based on the known benefits of binaural listening (Carhart, 1946; Keys, 1947; Hirsh, 1948; Koenig, 1950), though the potential advantages of binaural versus monaural amplification have been debated for many years.

One benefit of binaural listening, binaural squelch, occurs when the signal and competing noise come from different directions (Kock, 1950; Carhart, 1965). When the noise and signal come from different directions, time and intensity differences cause the waveforms arriving at each ear to differ, resulting in a dichotic listening situation. The central auditory system is thought to combine these two disparate waveforms, essentially subtracting the waveform arriving at one ear from that of the other, resulting in an effective SNR improvement of about 2-3 dB (Dillon, 2001).

Binaural redundancy, another potential benefit of listening with two ears, is an advantage created simply by receiving similar information in both ears. Dillon (2001) describes binaural redundancy as allowing the brain to get two “looks” at the same sound, resulting in SNR improvement of another 1-2 dB (MacKeith & Coles, 1971; Bronkhorst & Plomp, 1988).
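As a rough back-of-the-envelope illustration (using only the approximate figures cited above, not a clinical model), the two advantages can be combined: squelch applies only when speech and noise are spatially separated, while redundancy applies whenever both ears receive the signal.

```python
# Illustrative only: approximate binaural SNR advantages, using the
# rough midpoints of the figures cited above (Dillon, 2001).
SQUELCH_DB = 2.5      # binaural squelch: ~2-3 dB, requires spatial separation
REDUNDANCY_DB = 1.5   # binaural redundancy: ~1-2 dB, two "looks" at the signal

def binaural_snr_advantage(spatially_separated: bool) -> float:
    """Approximate SNR advantage (dB) of listening with two ears vs. one."""
    advantage = REDUNDANCY_DB        # available whenever both ears hear the signal
    if spatially_separated:
        advantage += SQUELCH_DB      # squelch adds only when noise is off-axis
    return advantage

print(binaural_snr_advantage(True))   # 4.0
print(binaural_snr_advantage(False))  # 1.5
```

The point of the sketch is simply that the advantages are additive in favorable conditions, which is why a single frontal loudspeaker (a diotic setup) understates the real-world binaural benefit.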

Though the benefits of binaural listening would imply benefits of binaural amplification as well, there has been a lack of consensus among researchers. Some studies have reported clear advantages to binaural amplification over monaural fittings, but others have not. Decades ago a number of studies were published on both sides of the argument, but differences in outcomes may have been related to speaker location and the presentation angles of the speech and noise signals (Ross, 1980) so the potential advantages of binaural amplification were still unclear.

Some recent reports have supported the use of monaural amplification over binaural for some individuals, in objective and subjective studies. Henkin et al. (2007) reported that 71% of their subjects performed better on a speech-in-noise task when fitted with one hearing aid on the “better” ear than when fitted with two hearing aids. Cox et al. (2011) reported that 46% of their subjects preferred to use one hearing aid rather than two.

In contrast, a report by Mencher & Davis (2006) concluded that 90% of adults perform better with two hearing aids. They explained that 10% of adults may have experienced negative binaural interaction or binaural interference, which is described as the inappropriate fusion of signals received at the two ears (Jerger et al., 1993; Chmiel et al., 1997).

The phenomenon of binaural interference and the potential advantage of monaural amplification was investigated by Walden & Walden (2005). In a speech-recognition-in-noise task in which speech and the competing babble were presented through a single loudspeaker at 0 degrees azimuth, they found that performance with one hearing aid was better than binaural performance for 82% of their participants. This contrasts with Jerger et al.’s (1993) report of an 8-10% incidence of subjects who might have experienced binaural interference. One criticism of Walden & Walden’s study is that their “monaural” condition left the unaided ear open. Their presentation level of 70 dB HL and their use of subjects with mild to moderate hearing loss indicate that subjects were still receiving speech and noise cues in the unaided ear, resulting in a modified, but still binaural, listening situation. Furthermore, their choice of a single loudspeaker presenting both noise and speech directly in front of the listener created a diotic listening condition, which eliminated the use of binaural head shadow cues. This methodology may have limited the study’s relevance to typical everyday situations in which listeners are engaged in face-to-face conversation with competing noise all around.

Because the potential advantages or disadvantages of binaural amplification have such important clinical implications, Rachel McArdle and her colleagues sought to clarify the issue with a two-part study of monaural and binaural listening. The first experiment was an effort to replicate Walden and Walden’s 2005 sound field study, this time adding a true monaural condition and an unaided condition. The second experiment examined monaural versus diotic and dichotic listening conditions, using real-world recordings from a busy restaurant.

Twenty male subjects were recruited from the Bay Pines Veterans Affairs Medical Facility. Subjects ranged in age from 59 to 85 years and had bilateral, symmetrical hearing losses. All were experienced users of binaural hearing aids.

For the first experiment, subjects wore their own hearing aids, so a variety of models from different manufacturers were represented. Hearing aids were fitted according to NAL-NL1 prescriptive targets and were verified with real-ear measurements. All of the hearing aids were multi-channel instruments with directional microphones, noise reduction and feedback management. None of the special features were disabled during the study.

Subjects were tested in sound field, with a single loudspeaker positioned 3 feet in front of them. They were tested under five conditions: 1) right ear aided, left ear open, 2) left ear aided, right ear open, 3) binaurally aided, 4) right ear aided, left ear plugged (true monaural) and 5) unaided. The QuickSIN test (Killion et al., 2004) was used to evaluate sentence recognition in noise in all of these conditions. The QuickSIN test yields a value for “SNR loss,” the increase in SNR, relative to normal-hearing listeners, that a listener requires to repeat 50% of the key words in the sentences correctly.
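For readers unfamiliar with how this metric is derived, the published QuickSIN scoring procedure (Killion et al., 2004) turns a simple count of correctly repeated key words into an SNR loss value. The sketch below assumes the standard list format of six sentences, five key words each, presented at SNRs stepping from 25 down to 0 dB:

```python
# A sketch of standard QuickSIN scoring (Killion et al., 2004).
# Six sentences, five key words each, at SNRs from 25 down to 0 dB.
def quicksin_snr_loss(words_correct: int) -> float:
    """SNR loss in dB: the extra SNR a listener needs, relative to
    normal-hearing listeners, to repeat 50% of key words correctly."""
    if not 0 <= words_correct <= 30:
        raise ValueError("a QuickSIN list contains 30 key words")
    snr_50 = 27.5 - words_correct   # Spearman-Karber estimate of the 50% point
    return snr_50 - 2.0             # normal-hearing SNR-50 is approximately 2 dB

print(quicksin_snr_loss(30))  # -4.5 (better than the normal-hearing average)
print(quicksin_snr_loss(20))  # 5.5  (moderate SNR loss)
```

Lower (more negative) values indicate better performance in noise, which is why the binaural condition producing the lowest SNR loss values is the key result of Experiment 1.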

The primary question of interest for the first experiment asked whether two aided ears would achieve better performance than one aided ear. The results showed that only 20% of the participants performed better with one aid, whereas 80% performed better with binaural aids. The lowest SNR loss values were for the binaural condition, followed by right ear aided, left ear aided, true monaural (with left ear plugged) and unaided. Analysis of variance revealed that the binaural condition was significantly better than all other conditions. The right-ear only condition was significantly better than unaided, but all other comparisons failed to reach significance.

The results of Experiment 1 are comparable to results reported by Jerger et al. (1993) but contrast sharply with Walden and Walden’s 2005 study, in which 82% of respondents performed better monaurally aided. To compare their results further with Walden and Walden’s, McArdle and her colleagues compiled scores for the subjects’ better ears and found no significant difference between binaural and better-ear performance, but both of these conditions were significantly better than the other conditions. They also examined the effect of degree of hearing loss and found that individuals with hearing thresholds poorer than 70 dB HL achieved about twice as much improvement from binaural amplification as subjects with better hearing. Still, the results of Experiment 1 support the benefit of binaural hearing aids for most participants, especially those with poorer hearing.

The purpose of Experiment 2 was to further examine the potential benefit of hearing with two ears, using diotic and dichotic listening conditions. Diotic listening refers to a condition in which the listener receives the same stimulus in both ears, whereas dichotic listening refers to more typical real-world conditions in which each ear receives slightly different information, subject to head shadow and time and intensity differences.

Speech recognition was evaluated in four conditions: 1) monaural right, 2) monaural left, 3) diotic and 4) binaural or dichotic. Materials for the R-SPACE QSIN test (Revit, et al., 2007) were recorded through a KEMAR manikin with competing restaurant noise presented through eight loudspeakers. Recordings were taken from eardrum-position microphones on each side of KEMAR, thus preserving binaural cues that would be typical for a listener in a real-world setting.

In Experiment 2, subjects were not tested wearing hearing aids; the stimuli were presented via insert earphones. The use of recorded stimuli presented under earphones eliminated the potentially confounding factor of hearing aid technology on performance and allowed the presentation of real-world recordings in truly monaural, diotic and dichotic conditions.

The best performance was demonstrated in the binaural condition, followed by the diotic condition, then the monaural conditions. The binaural condition was significantly better than the diotic condition, and both the diotic and dichotic conditions were significantly better than the monaural conditions. Again in contrast to Walden and Walden’s study, 80% of the subjects scored better in the binaural condition than in either of the monaural conditions, and 65% of the subjects scored better in the diotic condition than in either monaural condition. These results support the findings of the first experiment and indicate that for the majority of listeners, speech recognition in noise improves when two ears are listening instead of one. Furthermore, the finding that the binaural condition was significantly better than the diotic condition indicates that it is not only the use of two ears, but also the availability of binaural cues, that has a positive impact on speech recognition in competing noise.

McArdle and her colleagues point out that their study, as well as other recent reports (Walden & Walden, 2005; Henkin et al., 2007), suggests that the majority of listeners perform better on speech-in-noise tasks when they are listening with two ears. When binaural time and intensity cues are available, performance is even better than when the same stimulus reaches each ear. They also found that the potential benefit of binaural hearing was even more pronounced for individuals with more severe hearing loss. This supports the recommendation of binaural hearing aids for individuals with bilateral hearing loss, especially those with severe loss.

Cox et al. (2011) reported that listeners who experienced better performance in everyday situations tended to prefer binaural hearing aid use, but also found that 43 out of 94 participants generally preferred monaural to binaural use over a 12-week trial. For new hearing aid users or prior monaural users, this is not surprising, as it can take time to adjust to binaural hearing aid use. Clinical observation suggests that individuals who have prior monaural hearing aid experience may have more difficulty adjusting to binaural use than individuals who are new to hearing aids altogether. However, with consistent daily use, reasonable expectations and appropriate counseling, most users can successfully adapt to binaural use. It is possible that the subjects in Cox et al.’s study who preferred monaural use were responding to factors other than performance in noise. If they were switching between monaural and binaural use, perhaps they did not wear the two instruments together consistently enough to fully acclimate to binaural use in the time allotted.

Though their study presented strong support for binaural hearing aid use, McArdle and her colleagues suggest that listeners may benefit from “self-experimentation” to determine the optimal configuration with their hearing aids. This suggestion is an excellent one, but it may be most helpful within the context of binaural use. Even patients with adaptive and automatic programs can be fitted with manually accessible programs designed for particularly challenging situations and should be encouraged to experiment with these programs to determine the optimal settings for their various listening needs.

Clinicians who have been practicing for several years may recall the days when hearing aid users often lost their hearing aids in restaurants because they had removed one aid in order to more easily ignore background noise. That is less likely to occur now, as current technology can help most hearing aid users function quite well in noisy situations. With directional microphones and multiple programs, along with the likelihood that speech and background noise are often spatially separated, binaural hearing aids are likely to offer advantageous performance for speech recognition in most acoustic environments. Bilateral data exchange and wireless communication offer additional binaural benefits, as two hearing instruments can work together to improve performance in noise and provide binaural listening for telephone or television use.

References

Bronkhorst, A.W. & Plomp, R. (1988). The effect of head induced interaural time and level differences on speech intelligibility in noise. Journal of the Acoustical Society of America 83, 1508-1516.

Carhart, R. (1946). Selection of hearing aids. Archives of Otolaryngology 44, 1-18.

Carhart, R. (1965). Problems in the measurement of speech discrimination. Archives of Otolaryngology 82, 253-260.

Chmiel, R., Jerger, J., Murphy, E., Pirozzolo, F. & Tooley-Young, C. (1997). Unsuccessful use of binaural amplification by an elderly person. Journal of the American Academy of Audiology 8, 1-10.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

Dillon, H. (2001). Monaural and binaural considerations in hearing aid fitting. In: Dillon, H., ed. Hearing Aids. Turramurra, Australia: Boomerang Press, 370-403.

Henkin, Y., Waldman, A. & Kishon-Rabin, L. (2007). The benefits of bilateral versus unilateral amplification for the elderly: are two always better than one? Journal of Basic and Clinical Physiology and Pharmacology 18(3), 201-216.

Hirsh, I.J. (1948). Binaural summation and interaural inhibition as a function of the level of masking noise. American Journal of Psychology 61, 205-213.

Jerger, J., Silman, S., Lew, H.L. & Chmiel, R. (1993). Case studies in binaural interference: converging evidence from behavioral and electrophysiologic measures. Journal of the American Academy of Audiology 4, 122-131.

Keys, J.W. (1947). Binaural versus monaural hearing. Journal of the Acoustical Society of America 19, 629-631.

Killion, M.C., Niquette, P.A., Gudmundsen, G.I., Revit, L.J. & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal hearing and hearing-impaired listeners. Journal of the Acoustical Society of America 116, 2395-2405.

Kock, W.E. (1950). Binaural localization and masking. Journal of the Acoustical Society of America 22, 801-804.

Koenig, W. (1950). Subjective effects in binaural hearing. [Letter to the Editor] Journal of the Acoustical Society of America 22, 61-62.

MacKeith, N.W. & Coles, R.A. (1971). Binaural advantages in hearing speech. Journal of Laryngology and Otology 85, 213-232.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Mencher, G.T. & Davis, A. (2006). Bilateral or unilateral amplification: is there a difference? A brief tutorial. International Journal of Audiology 45 (S1), S3-S11.

Revit, L., Killion, M. & Compton-Conley, C. (2007). Developing and testing a laboratory sound system that yields accurate real-world results. Hearing Review online edition, www.hearingreview.com, October 2007.

Ross, M. (1980). Binaural versus monaural hearing aid amplification for hearing impaired individuals. In: Libby, E.R., Ed. Binaural Hearing and Amplification. Chicago: Zenetron, 1-21.

Walden, T.C. & Walden, B.E. (2005). Monaural versus binaural amplification for adults with impaired hearing. Journal of the American Academy of Audiology 16, 574-584.

Cochlear Dead Regions and High Frequency Gain: How to Fit the Hearing Aid

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Cochlear dead regions (DRs) are defined as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). A dead region does not result in an inability to perceive sound in the affected frequency range; rather, the sound is perceived via a spread of excitation to adjacent regions of the cochlea where the inner hair cells are still functioning. Because the response is spread over a broader tonotopic region, patients with cochlear dead regions may perceive some high frequency pure tones as “clicks”, “buzzes” or “whooshes” rather than tones.

Dead regions can be present at moderate hearing thresholds (e.g. 60dB HL) and are more likely to be present as the degree of loss increases. Psychophysical tuning curves are the preferred method for identifying cochlear dead regions in the laboratory, but they are complicated and time consuming. Moore and his colleagues developed the Threshold Equalizing Noise (TEN) Test as a clinical means of identifying dead regions (Moore et al., 2000; Moore et al., 2004). The TEN procedure looks for shifts in masked thresholds beyond what would typically be expected for a given hearing loss. A threshold obtained with TEN masking noise that shifts at least 10dB indicates the likely presence of a cochlear dead region.
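For readers who want to see the decision rule in concrete form, the 10dB shift criterion described above amounts to a simple comparison. This is only a sketch of the rule as stated here; the published TEN(HL) criteria are more detailed (they also consider the absolute threshold), and the function name and example values are illustrative.

```python
# Simplified sketch of the TEN-test decision described above: a masked
# threshold that shifts at least 10 dB beyond the expected masked
# threshold suggests a dead region. (The full published criteria are
# more detailed; this encodes only the 10 dB shift rule mentioned here.)
def suggests_dead_region(masked_threshold_db, expected_masked_threshold_db):
    return masked_threshold_db - expected_masked_threshold_db >= 10

print(suggests_dead_region(78, 65))  # True  (13 dB shift)
print(suggests_dead_region(70, 65))  # False (5 dB shift)
```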

Historically, there has been a lack of consensus among clinical investigators as to whether or not high frequency gain is beneficial for hearing aid users who have cochlear dead regions. Some studies suggest that high frequency gain could have deleterious effects on speech perception and should be limited for individuals with cochlear dead regions (Moore, 2001b; Turner, 1999; Padilha et al., 2007). For example, Vickers et al. (2001) and Baer et al. (2002) studied the benefit of high frequency amplification in quiet and noise for individuals with and without DRs. Both studies reported that individuals with DRs were unable to benefit from high frequency amplification, and Gordo & Iorio (2007) found that hearing aid users with DRs performed worse with high-frequency amplification than they did without it.

In contrast, Cox and her colleagues (2011) found beneficial effects of high frequency audibility whether or not the participants had dead regions. Others have reported equivalent performance for participants with and without dead regions for quiet and low noise conditions; however, in high noise conditions the individuals without dead regions demonstrated further improvement when additional high frequency amplification was provided, whereas participants with dead regions did not (Mackersie et al., 2004). This article is summarized in more detail here: http://blog.starkeypro.com/bid/66368/Recommendations-for-fitting-patients-with-cochlear-dead-regions

The current study was undertaken to examine the subjective and objective effect of high frequency amplification on matched pairs of participants (with and without DRs) in a variety of conditions. Participants were fitted with hearing aids that had two programs: the first (NAL) was based on the NAL-NL1 formula and the second (LP) was identical to the NAL-NL1 program below 1000Hz, with amplification rolled off above 1000Hz.  The goals of the study were to compare performance with these two programs, for individuals with and without dead regions. The following measures were conducted:

1) Speech discrimination in quiet laboratory conditions

2) Speech discrimination in noisy laboratory conditions

3) Subjective performance in everyday situations

4) Subjective preference for everyday situations

Participants were recruited from a pool of individuals who had previously been identified as typical hearing aid patients (Cox et al., 2011). Participants had bilateral flat or sloping sensorineural hearing loss with thresholds above 25dB HL below 1kHz and thresholds of 60 to 90dB HL for at least part of the frequency range of 1-3kHz.

The TEN test (Moore et al., 2004) was administered to determine the presence of DRs. To be eligible for the study, participants needed to have one or more DRs in the better ear at or above 1kHz and no DRs below 1kHz. Participants were then divided into two groups: the experimental group with DRs and the control group without DRs. Individuals in the experimental group showed a diverse range of DR distribution across frequency. Almost half of the participants had DRs between 1-2kHz, whereas the remainder had DRs only at or above 3kHz. A little more than half of the participants had only one DR, whereas the others had more than one.

Individuals in the experimental group were matched in pairs with individuals from the control group. In total, there were 18 participant pairs; each matched for age, degree of hearing loss and slope of hearing loss. There were 24 men and 12 women. No attempt was made to match pairs based on gender.

Participants were fitted monaurally with behind-the-ear hearing aids coupled to vented skeleton earmolds. The monaural fitting was chosen to avoid complications when participants switched between the NAL and LP programs. Data collection was completed before the widespread availability of wireless hearing aids, so the participants would have had to reliably switch both hearing aids individually to the proper program every time to avoid making occasional subjective judgments based on mismatched programs.

The hearing aids had two programs: a program based on the NAL-NL1 prescription (NAL) and a program with high-frequency roll-off (LP). Participants were able to switch the programs themselves but could not identify the programs as NAL or LP. Half of the participants had NAL in P1 and LP in P2, whereas the other half had LP in P1 and NAL in P2.  Verification measures were conducted to ensure that the two programs matched below 1kHz and to make sure the participants judged the programs to be equally loud.

After a two week acclimatization period, participants returned for speech recognition testing and field trial training. Speech and noise stimuli were presented in a sound field with the unaided ear plugged during testing. Speech recognition in quiet was evaluated using the Computer Assisted Speech Perception Assessment (CASPA; Mackersie et al., 2001). The CASPA test includes lists of 10 consonant-vowel-consonant words spoken by a female. Five lists were presented for each of the NAL and LP programs. Stimuli were presented at 65dB SPL.

Speech recognition in noise was evaluated with the Bamford-Kowal-Bench Speech in Noise test (BKB-SIN; Etymotic Research, 2005), which contains sentences spoken by a male talker, masked by 4-talker babble. The test contains lists of 10 sentences with 31 scoring words. In each list, the signal-to-noise ratio (SNR) decreases by 3dB with each sentence, so that within any list the SNR ranges from +21dB to -6dB. Sentences were presented at 73dB, a “loud but OK” level, as recommended for this test.
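For readers who want to check the arithmetic of the list structure, the fixed 3dB step works out as follows (the list length and step size are taken from the description above, not from the test materials themselves):

```python
# Per-sentence SNRs in one BKB-SIN list: 10 sentences, starting at
# +21 dB and decreasing by 3 dB per sentence, ending at -6 dB.
snrs = [21 - 3 * i for i in range(10)]
print(snrs)  # [21, 18, 15, 12, 9, 6, 3, 0, -3, -6]
```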

Following the speech recognition testing, participants were trained in the field trial procedures for subjective ratings. They were asked to evaluate their ability to understand speech in everyday situations with the NAL and LP programs and identify occasions during which they felt they could understand some but not all of the words they were hearing. Participants were given booklets with daily rating sheets and listening checklists to record daily hours of hearing aid use and track the variety of their daily listening experiences.

After a two week field trial, participants returned to the laboratory for a second session of CASPA and BKB-SIN testing. They submitted rating sheets and listening checklists and were interviewed about their preferred hearing aid program for everyday listening. The interview consisted of questions that covered program preferences related to: understanding speech in quiet, understanding speech in noise, hearing over long distances, the sound of their own voice, sound quality, loudness, localization, the least tiring program and the one that provided the most comfortable sound. Participants were asked to indicate their preferred program for each of these criteria, as well as their preferred program for overall everyday use. They were asked to provide three reasons for their overall preference.

Speech recognition testing in quiet revealed no difference in overall performance between the two groups, but there was a significant difference based on the hearing aid program that was used. Listeners from both the experimental group and the control group performed better with the broadband NAL program, though the difference between the NAL and LP programs was larger for the control group than the experimental group. This indicates that the individuals without DRs were able to derive more benefit from the additional high frequency information in the NAL program than the individuals with DRs did.

Speech recognition testing in noise revealed a similar finding but in this case the improvement with the NAL program was only significant for the control group. Although the experimental group’s mean scores with the NAL program were higher than those with the LP program, the difference did not reach statistical significance.  Because the BKB-SIN test used variable SNR levels, performance-intensity functions were constructed with scores obtained using the NAL and LP programs. These functions revealed that at any given SNR, speech was more intelligible with the NAL program. However, there was more of a difference between the NAL and LP functions for the control group than the experimental group, consistent with a program by group statistical interaction.

Subjective ratings of speech understanding revealed no significant difference between the experimental and control groups, but there was a significant difference based on program.  Participants from the control and experimental groups rated their performance better with the NAL program.

Interviews concerning program preference revealed that 23 participants preferred the NAL program and 11 preferred the LP program. There was no association with the presence of DRs. When the reasons supporting the participants’ preferences were analyzed, the most frequently mentioned reason for NAL preference was greater speech clarity. The most common reason for LP preference was that the other program (NAL) was too loud.

This investigation by Dr. Cox and her colleagues indicates that high-frequency amplification was beneficial to participants with single or multiple DRs, especially for speech recognition in quiet. In noise, participants with DRs still performed better with the NAL program, though the improvement was not as marked as it was for those without DRs. In field trials, participants with DRs reported more improvement with the NAL program than the control group did, indicating that perceived benefits in everyday situations exceeded any predictions of the laboratory results. At no point in the study did high-frequency amplification reduce performance for individuals with or without DRs.

This finding is in contrast with previous reports (Vinay & Moore, 2007a; Gordo & Iorio, 2007). Cox and her colleagues note that most of the participants in their study had only one or two DRs as opposed to several contiguous DRs. They allow that their findings might not relate to the performance of participants with several contiguous DRs, but point out that among typical hearing aid candidates, it is unlikely for individuals to have more than one or two DRs. With this consideration, the authors suggest that high frequency amplification should not be reduced, even in cases with identified dead regions.

This study from the University of Memphis provides a recommendation for use of prescribed settings and against reduction of high frequency gain for hearing aid users with one or two DRs.  They found beneficial effects of high frequency amplification in laboratory and everyday environments and noted no circumstances in which listeners demonstrated deleterious effects of high frequency amplification. These results may not pertain to individuals with several contiguous DRs but they are pertinent to the majority of typical hearing aid wearers. Their findings also support the use of subjective performance measures, as these provided additional information that was sometimes in contrast to the laboratory results. They point out that laboratory results do not always predict performance in everyday life and it can be extrapolated that clinical measures of efficacy should always be supported with subjective reports of effectiveness, like self-assessment of comfort and acceptance.

References

Baer, T., Moore, B.C. & Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 pt. 1), 1133-1144.

Ching, T.Y., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R. M., Alexander, G.C., & Johnson, J.A. (2011). Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32, 339-348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, May 2012, e-published ahead of print.

Etymotic Research (2005). BKB-SIN Speech in Noise Test, Version 1.03. Elk Grove Village, IL: Etymotic Research.

Gordo, A. & Iorio, M.C. (2007). Dead regions in the cochlea at high frequencies: Implications for the adaptation to hearing aids. Revista Brasileira de Otorrinolaringologia 73, 299-307.

Hogan, C.A. & Turner, C.W. (1998). High frequency audibility: benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Mackersie, C.L., Boothroyd, A. & Minniear, D. (2001). Evaluation of the Computer-Assisted Speech Perception Assessment Test (CASPA). Journal of the American Academy of Audiology 12, 390-396.

Mackersie, C.L., Crocker, T.L. & Davis, R.A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15, 498-507.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Moore, B.C., Huss, M. & Vickers, D.A. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C. (2001a). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C. (2001b). Dead regions in the cochlea: Implications for the choice of high-frequency amplification. In R.C. Seewald & J.S. Gravel (Eds). A Sound Foundation Through Early Amplification, p 153-166. Stafa, Switzerland: Phonak AG.

Moore, B.C., Glasberg, B.R. & Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Padilha, C., Garcia, M.V. & Costa, M.J. (2007). Diagnosing cochlear dead regions and its importance in the auditory rehabilitation process. Brazilian Journal of Otolaryngology 73, 556-561.

Turner, C.W. (1999). The limits of high-frequency amplification. Hearing Journal 52, 10-14.

Turner, C.W. & Cummings, K.J. (1999). Speech audibility for listeners with high-frequency hearing loss. American Journal of Audiology 8, 47-56.

Vickers, D.A., Moore, B.C. & Baer, T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay, B.T. & Moore, B.C. (2007a). Prevalence of dead regions in subjects with sensorineural hearing loss. Ear and Hearing 28, 231-241.

Do Patients with Severe Hearing Loss Benefit from Directional Microphones?

Ricketts, T.A., & Hornsby, B.W.Y. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology, 45, 190-197.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors.

The benefit of directional microphones for speech recognition in noise is well established for individuals with mild to moderate hearing loss (Madison & Hawkins, 1983; Killion et al., 1998; Ricketts 2000a; Ricketts & Henry, 2002).  The potential benefit of directional microphones for severely hearing-impaired individuals is less understood and few studies have examined directional benefit when hearing loss is greater than 65dB.

Killion and Christensen (1998) proposed that listeners with severe-to-profound hearing loss may experience reduced directional benefit because they are less able to make use of speech information across frequencies. Ricketts, Henry and Hornsby confirmed this hypothesis in a 2005 study. They found an approximately 7% increase in speech recognition score per 1dB increase in directivity for listeners with moderate hearing loss, whereas listeners with severe loss achieved only an approximately 3.5% increase per 1dB increase in directivity.
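To make these figures concrete, the per-dB benefit reported by Ricketts, Henry and Hornsby (2005) can be extrapolated to a given improvement in directivity. The linear scaling used here is an illustrative assumption for a quick estimate, not the authors' predictive model:

```python
# Rough illustration of the per-dB directivity benefit reported above:
# ~7%/dB for moderate hearing loss, ~3.5%/dB for severe hearing loss.
# Linear extrapolation is an assumption made for illustration only.
def predicted_benefit(directivity_gain_db, percent_per_db):
    """Estimated speech recognition improvement (percentage points)."""
    return directivity_gain_db * percent_per_db

# e.g. a hypothetical 4 dB improvement in directivity:
print(predicted_benefit(4, 7.0))   # 28.0 (moderate loss)
print(predicted_benefit(4, 3.5))   # 14.0 (severe loss)
```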

In their 2005 study, Ricketts and Hornsby used individually determined SNRs and auditory-visual stimuli that allowed testing at poorer SNRs without floor effects. The authors point out that visual cues usually offer a greater benefit at poor SNRs, especially for sentence materials (Erber, 1969; Sumby & Pollack, 1954; MacLeod & Summerfield, 1987). Because individuals rely more on visual cues at poorer SNRs, visual information that provides complementary, non-redundant cues is most beneficial (Grant, 1998; Walden et al., 1993).

The purpose of their study was to examine potential directional benefit for severely hearing-impaired listeners at multiple SNRs in auditory-only and auditory-visual conditions. Directional and omnidirectional performance in quiet conditions were also tested to rule out performance differences between microphone modes that could be attributed to reduction of environmental noise by the directional microphone. Finally, the study examined whether performance in quiet would significantly exceed performance at highly favorable SNRs. Though significant improvement at SNRs more favorable than +15dB is usually not expected, some research suggests that hearing-impaired individuals may experience additional benefit from more favorable SNRs (Studebaker et al., 1999).

Twenty adult participants with severe-to-profound sensorineural hearing loss participated in the study. All participants used oral communication, had at least nine years of experience with hearing aids and had pure tone average hearing thresholds greater than 65dB. Participants were fitted with power behind-the-ear hearing aids with full shell, unvented earmolds. Digital noise reduction and feedback management were turned off. The directional program was equalized so that gain matched the omnidirectional mode as closely as possible.

The Audio/Visual Connected Speech Test (CST; Cox et al, 1987), a speech recognition test with paired passages of connected speech, was presented to listeners on DVD. Speech was presented at a 0-degree azimuth angle and uncorrelated competing noise was presented via five loudspeakers surrounding the listener. Testing took place in a sound booth with reflective panels to approximate common levels of reverberation in everyday situations.

Baseline SNRs were obtained for each subject in auditory-only and auditory-visual conditions, at a level that was near, but not at, floor performance. Speech recognition testing was conducted for omnidirectional and directional conditions at baseline SNR, baseline + 4dB and baseline + 8dB. Presentation SNRs ranged from 0dB to +24dB for auditory-only conditions and from -6dB to +18dB for auditory-visual conditions. Listeners were tested with auditory-only stimuli in quiet conditions, for omnidirectional and directional modes. Testing in quiet was not performed with auditory-visual stimuli, as performance was expected to approach ceiling levels.

The multiple SNR levels were achieved with two different methodologies. Half of the participants listened to a fixed noise level of 60dB SPL and speech levels were varied to achieve the desired SNRs. The remaining participants listened to a fixed speech level of 67dB SPL and the noise levels were adjusted to reach the desired SNRs. Data analysis revealed no significant differences between these two test methodologies for any of the variables, so the data were pooled for subsequent analyses.
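The two methodologies reduce to the same arithmetic, since SNR in dB is simply the speech level minus the noise level. This sketch uses the fixed levels stated above (60dB SPL noise, 67dB SPL speech); the function names are illustrative:

```python
# SNR (dB) = speech level (dB SPL) - noise level (dB SPL).
def speech_level_for(snr_db, noise_db=60.0):   # fixed-noise method
    return noise_db + snr_db

def noise_level_for(snr_db, speech_db=67.0):   # fixed-speech method
    return speech_db - snr_db

# e.g. to present a +8 dB SNR with each method:
print(speech_level_for(8))   # 68.0 dB SPL speech over 60 dB SPL noise
print(noise_level_for(8))    # 59.0 dB SPL noise under 67 dB SPL speech
```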

The results showed significant main effects for microphone mode (directional versus omni), SNR and presentation condition (auditory-only versus auditory-visual). There were significant interactions between microphone mode and SNR, as well as between presentation condition and SNR.  Each increase in SNR resulted in significantly better performance for both omnidirectional and directional modes. Performance in directional mode was significantly better than omnidirectional for all SNR levels. The authors pointed out that auditory-visual performance at all three SNRs was always better than auditory-only, despite the fact that the absolute SNRs for auditory-visual conditions were lower than the equivalent auditory-only conditions, by an average of 5dB.  The authors interpreted this finding as strong support for the benefit of visual cues for speech recognition in adverse conditions.

When the effects of directionality and SNR were analyzed separately for auditory-only and auditory-visual conditions, they found that directional performance was significantly better than omnidirectional performance for all auditory-visual conditions. In auditory-only conditions, directionality only had a significant effect at the baseline SNR, but not in the baseline +4dB, baseline +8dB or quiet conditions.

Perhaps not surprisingly, Ricketts and his colleagues found that the addition of visual cues offered their severely-impaired listeners a significant advantage for understanding connected speech. When they compared the auditory-only and auditory-visual scores at equivalent SNR levels, they determined that participants achieved an average improvement of 22% with the availability of visual cues. This finding is in agreement with previous research that found a visual advantage of 24% for listeners with moderate hearing loss (Henry & Ricketts, 2003).

Also not surprisingly, performance improved with increases in signal to noise ratio.  For the auditory-only condition, they found an average improvement of 1.6% per dB and 2.7% per dB for the omnidirectional and directional modes, respectively. For the auditory-visual condition, there was an improvement of 3.7% per dB for omnidirectional mode and 3.1% per dB for directional mode.  Furthermore, they found an additional performance increase of 8% for directional mode and 12% for omnidirectional mode when participants were tested in quiet conditions. This was somewhat surprising given previous research based on the articulation index (AI) that suggested maximal performance could be expected at SNRs of approximately +15dB.  The absolute SNR for the baseline +8dB condition was 14.7dB, so further improvements in quiet conditions support the suggestion that hearing-impaired listeners experience increased improvement for SNRs up to +20dB (Studebaker et al, 1999; Sherbecoe & Studebaker, 2002).

The benefit of visual cues was not specifically addressed by this study because it did not compare auditory-only and auditory-visual performance at the same SNR levels. However, the discovery that visual cues improved performance even when the SNRs were approximately 5dB poorer was strong support for the benefit of visual information for speech recognition in noisy environments. This underscores the recommendation that severely hearing-impaired listeners should always be counseled to take advantage of visual cues whenever possible, especially in adverse listening conditions. Although visual cues cannot completely counterbalance the auditory cues lost to hearing loss and competing noise, they supply additional information that can help the listener identify or differentiate phonemes, especially in connected speech containing semantic and syntactic context. In conversational situations, visual cues include not just lip-reading but also the speaker’s gestures, expressions and body language. All of these cues can aid speech recognition, so hearing-impaired individuals as well as their family members should be trained in strategies to maximize the availability of visual information.

Ricketts and Hornsby’s study supports the potential benefit of directional microphones for individuals with severe hearing loss. Many hearing aid users with severe-to-profound loss have become accustomed to the use of omnidirectional microphones and may be resistant to directional microphones, especially automatic directionality, if it is in the primary program of their hearing instruments. One strategy for addressing these cases is to program the hearing aid’s primary memory as full-time omnidirectional while programming a second, manually accessed, memory with a full-time directional microphone. This way the listener is able to choose when and how they use their directional program and may be less likely to experience unexpected and potentially disconcerting changes in perceived loudness and sound quality.

In addition to providing evidence for the benefit of visual cues and directionality, the findings of this study can be extrapolated to support the use of FM and wireless accessories. The fact that performance in quiet conditions was still significantly better than the next most favorable SNR (14.7dB) shows that improving SNR as much as possible provides demonstrable advantages for listeners with severe hearing loss. Even for individuals who do well with their hearing instruments overall, wireless accessories that stream audio directly to the listener's hearing instruments may further improve understanding. These accessories improve SNR by reducing the effects of room acoustics and reverberation, as well as the effects of competing noise and of distance between the sound source and the listener. Most modern hearing instruments are compatible with wireless accessories, so hearing aid evaluations should always include discussion of their potential benefits. These devices work with a wide range of hearing aid styles, do not require the use of an adapter or receiver boot and are much less expensive than an FM system.

Ricketts and Hornsby’s study underscores the importance of visual information and directionality for speech recognition in noisy environments and illuminates ways in which clinicians can help patients with severe-to-profound loss achieve improved communication in everyday circumstances. Modern technologies such as directional processing and wireless audio streaming accessories can be effective tools for improving SNRs in everyday situations that may otherwise challenge or overwhelm the listener with severe to profound hearing loss.

References

Erber, N.P. (1969). Interaction of audition and vision in the reception of oral speech stimuli. Journal of Speech and Hearing Research 12, 423-425.

Grant, K.W., Walden, B.E. & Seitz, P.F. (1998). Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Henry, P. & Ricketts, T. A. (2003).  The effect of head angle on auditory and visual input for directional and omnidirectional hearing aids. American Journal of Audiology 12(1), 41-51.

Killion, M. C., Schulien, R., Christensen, L., Fabry, D. & Revit, L. (1998). Real world performance of an ITE directional microphone. Hearing Journal 51(4), 24-38.

Killion, M.C. & Christensen, L. (1998). The case of the missing dots: AI and SNR loss. Hearing Journal 51(5), 32-47.

MacLeod, A. & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology 21, 131-141.

Madison, T.K. & Hawkins, D.B. (1983). The signal-to-noise ratio advantage of directional microphones. Hearing Instruments 34(2), 18, 49.

Pavlovic, C. (1984). Use of the articulation index for assessing residual auditory function in listeners with sensorineural hearing impairment. Journal of the Acoustical Society of America 75, 1253-1258.

Pavlovic, C., Studebaker, G. & Sherbecoe, R. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. Journal of the Acoustical Society of America 80(1), 50-57.

Ricketts, T.A. (2000a). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21(3), 194-205.

Ricketts, T.A. & Henry, P. (2002). Evaluation of an adaptive directional microphone hearing aid. International Journal of Audiology 41(2), 100-112.

Ricketts, T., Henry, P. & Hornsby, B. (2005). Application of frequency importance functions to directivity for prediction of benefit in uniform fields. Ear & Hearing 26(5), 473-86.

Studebaker, G., Sherbecoe, R., McDaniel, D. & Gwaltney, C. (1999). Monosyllabic word recognition at higher-than-normal speech and noise levels. Journal of the Acoustical Society of America 105(4), 2431-2444.

Sherbecoe, R.L. & Studebaker, G.A. (2002). Audibility-index functions for the Connected Speech Test. Ear & Hearing 23(5), 385-398.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Walden, B.E., Busacco, D.A. & Montgomery, A.A. (1993). Benefit from visual cues in auditory-visual speech recognition by middle-aged and elderly persons. Journal of Speech and Hearing Research 36, 431-436.

Transitioning the Patient with Severe Hearing Loss to New Hearing Aids

Convery, E., & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors. 

Many individuals with severe-to-profound hearing loss are full-time, long-term hearing aid users. Because they rely heavily on their hearing aids for everyday communication, they are often reluctant to try new technology, and it is common to see patients with severe hearing loss keep a set of hearing aids longer than those with mild-to-moderate losses. These older hearing aids offered less effective feedback suppression and a narrower frequency range than those available today. As a result, many severely-impaired hearing aid users were fitted with inadequate high-frequency gain and compensatory increases in low- and mid-frequency amplification. Having adapted to this frequency response, they may reject new hearing aids with increased high-frequency gain, stating that they sound too tinny or unnatural. Similarly, those who have adjusted to linear amplification may reject wide-dynamic-range compression (WDRC) as too soft, even though the strategy may provide some benefits compared to their linear hearing aids.

Convery and Keidser evaluated a method to gradually transition experienced, severely impaired hearing aid users to new amplification characteristics. They measured subjective and objective outcomes as subjects took incremental steps toward a more appropriate frequency response. Twenty-three experienced, adult hearing aid users participated in the study. Participation was limited to subjects whose current gain and frequency response differed significantly from targets based on NAL-RP, a modification of the NAL formula for severe to profound hearing losses (Byrne et al., 1991). Most subjects' own instruments had more gain at 250-2000 Hz and less gain at 6-8 kHz than NAL-RP targets, so the experimental transition involved adapting to less low- and mid-frequency gain and more high-frequency gain.

Subjects in the experimental group were fitted bilaterally with WDRC behind-the-ear hearing instruments. Directional microphones, noise reduction and automatic features were turned off, and volume controls were activated with an 8 dB range. The hearing aids had two programs: the first, called the "mimic" program, had a gain/frequency response adjusted to match the subject's current hearing aids; the second was set to NAL-RP targets. MPO was the same for both programs. The programs were not accessible to the user; they were adjusted only by the experimenters at test sessions.

Four incremental programs were created for each participant in the experimental group. Each step was approximately a 25% progression from the mimic program's frequency response to the NAL-RP prescribed response. At 3-week intervals, subjects were switched to the next incremental program, approaching NAL-RP settings as the experiment progressed. The programs in the control group's hearing aids remained unchanged for the duration of the study.
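The stepwise transition described above amounts to interpolating between two gain/frequency responses. A minimal sketch of that idea is below; the gain values and the simple linear interpolation are illustrative assumptions, not the authors' actual fitting data or software.

```python
# Sketch of the incremental transition scheme: each step moves the
# gain/frequency response ~25% of the way from the "mimic" settings
# toward the NAL-RP targets. All gain values here are hypothetical.

def transition_programs(mimic_gains, nal_rp_gains, steps=4):
    """Return a list of gain curves stepping from mimic toward NAL-RP."""
    programs = []
    for step in range(1, steps + 1):
        fraction = step / steps  # 0.25, 0.50, 0.75, 1.00
        program = [m + fraction * (t - m)
                   for m, t in zip(mimic_gains, nal_rp_gains)]
        programs.append(program)
    return programs

# Hypothetical insertion gains (dB) at 250, 500, 1000, 2000, 4000 Hz
mimic = [35.0, 40.0, 45.0, 40.0, 20.0]   # more low/mid-frequency gain
nal_rp = [25.0, 32.0, 40.0, 44.0, 34.0]  # less low/mid, more high

for i, prog in enumerate(transition_programs(mimic, nal_rp), start=1):
    print(f"Step {i}: {[round(g, 1) for g in prog]}")
```

By the fourth step the interpolated program equals the NAL-RP targets, mirroring the study's endpoint at 15 weeks post-fitting.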

All subjects attended eight sessions. At the initial session, subjects' own instruments were measured in a 2cc coupler and RECD measurements were obtained with their own earmolds. The experimental hearing aids were fitted at the next session, and subjects returned for follow-up at 1 week post-fitting and at 3-week intervals thereafter until 15 weeks post-fitting.

Subjects evaluated the mimic and NAL-RP programs in paired comparisons at 1 week and 15 weeks post-fitting. The task used live dialogues with female talkers in four everyday environments: café, office, reverberant stairwell and outdoors with traffic noise in the background. Hearing aid settings were switched from mimic to NAL-RP with a remote control, without audible program change beeps, so subjects were unaware of their current program. They were asked to indicate their preference for one program over the other on a 4-point scale: no difference, slightly better, moderately better or much better.

Speech discrimination was evaluated with the Beautifully Efficient Speech Test (BEST; Schmitt, 2004), which measured the aided SRT for sentence stimuli. Loudness scaling was then conducted to determine the most comfortable loudness level and range (MCL/R). Finally, subjects responded to a questionnaire concerning overall loudness comfort, speech intelligibility, sound quality, use of the volume control, use of their own hearing aids and perceived changes in audibility and comfort. Speech discrimination, loudness scaling and questionnaire administration took place for all participants at 3-week intervals, starting at the 3-week post-fitting session.

One goal of the study was to determine whether speech discrimination changed over time or differed between the experimental and control groups. Analysis of BEST SRT scores yielded no significant difference between the experimental and control groups, nor was there a significant change in SRT over time. There was, however, a significant interaction between these variables: the experimental group's SRT scores became slightly poorer over time, whereas the control group's SRTs improved slightly.

Subjects rated perceptual disturbance, or how much the hearing aid settings in the current test period differed from the previous period and how disturbing the difference was. There was no significant effect for the experimental or control groups, but there was a tendency for reports of perceptual disturbance to decrease over time for the control group and increase for the experimental group. The control group's mimic programs were consistent, so these subjects likely became acclimated over time. The experimental group, however, received incremental changes to their settings at each session, so it is not surprising that they reported more perceptual disturbance. This was only a slight trend, however, indicating that even the experimental group experienced relatively little disturbance as their hearing aids approached NAL-RP targets.

Analysis of the paired comparison responses indicated a significant overall preference for the mimic program over the NAL-RP program. There was an interaction between environment and listening program, showing a strong preference for the mimic program in the office and outdoor environments and a weaker preference in the café and stairwell environments. When asked about their criteria for the comparisons, subjects most commonly cited speech clarity, loudness comfort and naturalness, regardless of whether mimic or NAL-RP was preferred. There was no significant effect of time on program preference, but there was a slight increase in the control group's preference for mimic at the end of the study, whereas the experimental group shifted slightly away from mimic, toward NAL-RP.

Over the course of the study, Convery and Keidser’s subjects demonstrated acceptance of new frequency responses with less low- to mid-frequency gain and more high frequency gain than their current hearing aids. No significant differences were noted between experimental and control groups for loudness, sound quality, voice quality, intelligibility or overall performance, nor did these variables change significantly over time. Though all subjects preferred the mimic program overall, there was a trend for the experimental group to shift slightly toward a preference for the NAL-RP settings, whereas the control group did not. This indicates that the experimental subjects had begun to acclimate to the new, more appropriate frequency response. Acclimatization might have continued to progress, had the study examined performance over a longer period of time. Prior research indicates that acclimatization to new hearing aids can progress over the course of several months and individuals with moderate and severe losses may require more time to adjust than individuals with milder losses (Keidser et al, 2008).

Reports of perceptual disturbance increased as incremental programs approached NAL-RP settings. This may not be surprising to clinicians, as hearing aid patients often require a period of acclimatization even after relatively minor changes to their hearing aid settings. Furthermore, clinical observation supports the suggestion that individuals with severe hearing loss may be even more sensitive to small changes in their frequency response. Allowing more than three weeks between program changes may result in less perceptual disturbance and easier transition to the new frequency response. Clinically, perceptual disturbance with a new frequency response can also be mitigated by counseling and encouraging patients that they will feel more comfortable with the new hearing aids as they progress through their trial periods.  It might also be helpful to extend the trial period (which is usually 30-45 days) for individuals with severe to profound hearing losses, to accommodate an extended acclimatization period.

Individuals with severe-to-profound hearing loss often hesitate to try new hearing aids.  Similarly, audiologists may be reluctant to recommend new instruments with WDRC or advanced features for fear that they will be summarily rejected. Convery and Keidser’s results support a process for transitioning experienced hearing aid users into new technology and suggest an alternative for clinicians who might otherwise hesitate to attempt departures from a patient’s current frequency response.

Because this was a double-blind study, the research audiologists were unable to counsel subjects as they would in a typical clinical situation.  The authors note that counseling during transition is of particular importance for severely impaired hearing aid users, to ensure realistic expectations and acceptance of the new technology. Though the initial fitting may approximate the client’s old frequency response, follow-up visits at regular intervals should slowly implement a more desirable frequency response.  Periodically, speech discrimination and subjective responses should be evaluated and the transition should be stopped or slowed if decreases in intelligibility or perceptual disturbances are noted.

In addition to changes in the frequency response, switching to new hearing aid technology usually means the availability of unfamiliar features such as directional microphones, noise reduction and many wireless features. Special features such as these can be introduced after the client acclimates to the new frequency response, or they can be relegated to alternate programs to be used on an experimental basis by the client. For instance, automatic directional microphones are sometimes not well-received by individuals who have years of experience with omnidirectional hearing aids. By offering directionality in an alternate program, the individual can test it out as needed and may be less likely to reject the feature or the hearing aids.  It is critical to discuss proper use of the programs and to set up realistic expectations.  Because variable factors such as frequency resolution and sensitivity to incremental amplification changes may affect performance and acceptance, the transition period should be tailored to the needs of the individual and monitored closely with regular follow-up appointments.

References

Baer, T., Moore, B.C.J. & Kluk, K. (2002).  Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 112, 1133-1144.

Barker, C., Dillon, H. & Newall, P. (2001). Fitting low ratio compression to people with severe and profound hearing losses. Ear and Hearing. 22, 130-141.

Byrne, D., Parkinson, A. & Newall, P. (1991).  Modified hearing aid selection procedures for severe/profound hearing losses. In: Studebaker, G.A. , Bess, F.H., Beck, L. eds. The Vanderbilt Hearing Aid Report II. Parkton, MD: York Press, 295-300.

Ching, T.Y.C., Dillon, H., Lockhart, F., vanWanrooy, E. & Carter, L. (2005). Are hearing thresholds enough for prescribing hearing aids? Poster presented at the 17th Annual American Academy of Audiology Convention and Exposition, Washington, DC.

Convery, E. & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

Flynn, M.C., Davis, P.B. & Pogash, R. (2004). Multiple-channel non-linear power hearing instruments for children with severe hearing impairment: long-term follow-up. International Journal of Audiology. 43, 479-485.

Keidser, G., Hartley, D. & Carter, L. (2008). Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey. American Journal of Audiology. 17, 136-146.

Keidser, G., O’Brien, A., Carter, L., McLelland, M., and Yeend, I. (2008) Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology. 47(10), 621-635.

Kuhnel, V., Margolf-Hackl, S. & Kiessling, J. (2001).  Multi-microphone technology for severe to profound hearing loss. Scandinavian Audiology 30 (Suppl. 52), 65-68.

Moore, B.C.J. (2001). Dead regions in the cochlea: diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification. 5, 1-34.

Moore, B.C.J., Killen, T. & Munro, K.J. (2003). Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. International Journal of Audiology. 42, 465-474.

Schmitt, N. (2004). A New Speech Test (BEST Test). Practical Training Report. Sydney: National Acoustic Laboratories.

Vickers, D.A., Moore, B.C.J. & Baer, T. (2001). Effect of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 110, 1164-1175.