Starkey Research & Clinical Blog

The Tinnitus Functional Index (TFI): A New and Improved Way to Evaluate Tinnitus

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The practice of clinical audiology can arguably be described as having two primary goals: the diagnosis of auditory and vestibular disorders, followed by verifiable, effective treatment and rehabilitation. There are well-established, objective diagnostic tests for hearing and vestibular disorders, and their treatment methods can be verified with objective and subjective tools. The evaluation and treatment of tinnitus, though equally important, is more complicated. There are test protocols for matching perceived tinnitus characteristics, but the impact of tinnitus on the individual must be measured subjectively with self-assessment questionnaires. Several published questionnaires evaluate tinnitus severity and the impact it has on an individual’s activities, emotions and relationships. However, most of these questionnaires were not designed specifically to measure the effect of tinnitus treatments (Kamalski et al., 2010), so their value as follow-up measures is unknown.

Tinnitus affects as many as 50 million Americans and can have disabling effects, including sleep interference, difficulty concentrating and attending, anxiety, frustration and depression (for review see Tyler & Baker, 1983; Stouffer & Tyler, 1990; Axelsson, 1992; Meikle, 1992; Dobie, 2004b). There are numerous methods of treatment available, including hearing aids, tinnitus maskers, tinnitus retraining therapy, biofeedback, counseling and others. Because there is currently no standard assessment tool to evaluate tinnitus treatment outcomes, the effectiveness of tinnitus treatment methods is difficult to verify and compare. The Tinnitus Functional Index (TFI) was developed as a collaborative effort among researchers and clinicians to produce a validated, standard questionnaire that can be used clinically for intake assessments and follow-up measurements and in the laboratory as a way of comparing treatment efficacy and identifying subject characteristics.

The developers of the TFI aimed for this inventory to be used in three ways:

1. As an intake evaluation tool to identify individual differences in tinnitus patients.
2. As a reliable and valid measurement of multiple domains of tinnitus severity.
3. As an outcome measure to assess treatment-related change in tinnitus.

The study, supported by a grant from the Tinnitus Research Consortium (TRC), had three stages. The first stage involved consultation with 21 tinnitus experts, including audiologists, otologists and hearing researchers. The panel of experts evaluated 175 items from nine previously published tinnitus questionnaires and judged them based on their relevance to 10 domains of negative tinnitus impact as well as their expected responsiveness, or ability to measure treatment-related improvement. After analyzing the content validity, relevance and potential responsiveness of the 175 items (Haynes et al., 1995), 43 items were selected for the first questionnaire prototype. The TRC initially required that 10 domains of negative tinnitus impact be covered by the TFI, but the expert panel added three more, so the first prototype of the TFI covered 13 domains of tinnitus impact. The TRC also recommended avoiding overly negative items (such as those referring to suicidal thoughts or feeling victimized or helpless), items referring to hearing loss without mentioning tinnitus and items referring to more than one subtopic. Each domain contained three or four items, based on recommendations for achieving adequate reliability (Fabrigar et al., 1999; Moran et al., 2001). Each questionnaire item asked respondents for a rating on a scale of 0 to 10, based on how they experienced their tinnitus “over the past week”. For example, a typical question read, “Over the past week, how easy was it for you to cope with your tinnitus?” with responses ranging from 0 (“very easy to cope”) to 10 (“impossible to cope”).

During the second stage of the study, TFI Prototype 1 was tested on 326 tinnitus patients at five independent clinical sites. The goals for the second stage were to determine the responsiveness of items, or their ability to reflect changes in tinnitus status, to evaluate the 13 tinnitus impact domains and to determine the TFI’s ability to scale tinnitus severity. The questionnaire was administered at the initial intake assessment, after 3 months and after 6 months. In addition to completing the TFI, at the initial encounter the subjects completed a brief tinnitus history questionnaire, the Tinnitus Handicap Inventory (THI; Newman et al., 1996) and the Beck Depression Inventory for Primary Care (BDI-PC; Beck et al., 1997). The TFI was re-administered to 65 subjects after 3 months and again to 42 subjects after 6 months.

The researchers found that subjects had very few problems responding to the 43 selected items and that most questionnaires were fully completed. There were no floor or ceiling effects, meaning there were no items for which responses clustered at either end of the scale, which would have reduced the potential responsiveness of those items. The TFI had very high convergent validity, which means it agreed well with other published scales of tinnitus severity, such as the THI. There were large effect sizes, demonstrating that the Prototype 1 items had good responsiveness to treatment-related change and supporting use of the TFI as an outcome measure. Factor analysis of the 13 initial tinnitus impact domains yielded 8 clearly structured domains, which were retained for the second prototype.
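
For readers who want to apply the same kind of psychometric screening to their own questionnaire data, the sketch below flags potential floor or ceiling effects by checking the proportion of responses at the extremes of a 0-10 scale. It is a minimal illustration only; the 15% cutoff is a common rule of thumb rather than the criterion used by Meikle and colleagues, and the function name and example data are hypothetical.

```python
# Illustrative sketch: flag items whose responses cluster at either end of a
# 0-10 scale. The 15% cutoff is a common rule of thumb, not the criterion used
# in the TFI study; the example data are made up.
def floor_ceiling_check(item_responses: dict[str, list[int]],
                        scale_min: int = 0, scale_max: int = 10,
                        cutoff: float = 0.15) -> dict[str, str]:
    """Return a flag per item: 'floor', 'ceiling', or 'ok'."""
    flags = {}
    for item, responses in item_responses.items():
        n = len(responses)
        floor_prop = sum(r == scale_min for r in responses) / n
        ceiling_prop = sum(r == scale_max for r in responses) / n
        if floor_prop > cutoff:
            flags[item] = "floor"
        elif ceiling_prop > cutoff:
            flags[item] = "ceiling"
        else:
            flags[item] = "ok"
    return flags

print(floor_ceiling_check({"cope": [2, 5, 7, 4, 6], "annoyed": [10, 10, 9, 10, 10]}))
# -> {'cope': 'ok', 'annoyed': 'ceiling'}
```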

The third stage of the study involved development and evaluation of TFI Prototype 2, which was modified based on validity and reliability measurements from the first prototype. Prototype 2 included the 30 best-functioning items from the first version, categorized according to 8 tinnitus impact domains. It was administered to 347 new participants at the initial assessment. Follow-up data were obtained from 155 participants after 3 months and from 85 participants after 6 months. Encouragingly, the results from clinical evaluation of Prototype 2 again showed good performance for all of the validity and reliability measurements, supporting its use for scaling tinnitus severity.

The best performing items from Prototype 2 were used to create the final version of the TFI, which contains 25 items in 8 domains or sub-scales: Intrusive, Sense of Control, Cognitive, Sleep, Auditory, Relaxation, Quality of Life and Emotional. Seven of the domains contain 3 items and the Quality of Life domain contains 4 items.
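
The official TFI form and scoring instructions are available from OHSU (see the link at the end of this post). Purely as an illustration of how a 25-item instrument with this subscale structure can be scored, the sketch below averages the 0-10 item ratings within each subscale and rescales to a 0-100 range; the subscale names and item counts follow the description above, while the scaling convention is a simplifying assumption and missing responses are not handled.

```python
# Illustrative sketch only, not the official TFI scoring algorithm (obtain that,
# with the form, from OHSU). Subscale names and item counts follow the text above;
# missing responses are not handled here.
from statistics import mean

SUBSCALES = {
    "Intrusive": 3, "Sense of Control": 3, "Cognitive": 3, "Sleep": 3,
    "Auditory": 3, "Relaxation": 3, "Quality of Life": 4, "Emotional": 3,
}

def score_questionnaire(responses: dict[str, list[int]]) -> dict[str, float]:
    """responses maps each subscale name to its list of 0-10 item ratings."""
    scores = {}
    for subscale, n_items in SUBSCALES.items():
        items = responses[subscale]
        assert len(items) == n_items, f"expected {n_items} items for {subscale}"
        scores[subscale] = mean(items) * 10   # subscale score rescaled to 0-100
    all_items = [r for items in responses.values() for r in items]
    scores["overall"] = mean(all_items) * 10  # overall score rescaled to 0-100
    return scores
```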

When used during the initial assessment, the TFI categorizes tinnitus severity according to five levels: not a problem, a small problem, a moderate problem, a big problem or a very big problem. As a screening tool, this allows a clinician to determine the overall severity of the tinnitus to help formulate a treatment plan and consider whether referrals to other clinical professionals are needed. For example, an individual who scores in the “not a problem” level may require only brief reassurance and counseling and may be asked to follow up only if symptoms progress. In contrast, an individual who scores in the “big problem” or “very big problem” categories will likely need referrals for additional diagnostic and therapeutic services right away.

The developers of the TFI acknowledge that their study is preliminary and more research is needed to determine the TFI’s value as an outcome measurement tool. However, based on their analyses, they recommend that a change in TFI score of 13 points should be considered meaningful. In other words, a decrease of 13 points or more indicates a meaningful improvement in response to treatment, whereas an increase of 13 points or more indicates a significant exacerbation of symptoms.
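
In practice, that recommendation reduces follow-up interpretation to a simple rule. The sketch below illustrates it, assuming the 0-100 overall score from the scoring sketch above; the 13-point threshold comes from the authors' recommendation, and everything else is illustrative.

```python
# Minimal sketch of the interpretation rule described above, assuming overall TFI
# scores on a 0-100 scale; the 13-point threshold is the authors' recommendation.
MEANINGFUL_CHANGE = 13

def interpret_change(baseline_score: float, followup_score: float) -> str:
    change = followup_score - baseline_score
    if change <= -MEANINGFUL_CHANGE:
        return "meaningful improvement"
    if change >= MEANINGFUL_CHANGE:
        return "meaningful worsening"
    return "no meaningful change"

print(interpret_change(baseline_score=62, followup_score=44))  # -> meaningful improvement
```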

Most tinnitus self-report questionnaires were designed to assess tinnitus impact but do not specifically measure treatment outcomes. The Tinnitus Handicap Inventory (THI; Newman et al., 1996), however, has shown some promise as an initial evaluation tool and as a way to measure treatment outcome. After formulation of the final version of the TFI, the effect sizes of the TFI were compared to those of the THI. Overall, the TFI had greater responsiveness, indicating that it could potentially yield statistically significant differences with fewer subjects than the THI would require. Evaluation of sub-scale domains yielded some differences between the TFI and THI, primarily related to the “Catastrophic” subscale of the THI. Most of these items were not included in the TFI, based on the TRC’s recommendations to avoid questions dealing with negative ideation. The TRC recommended against inclusion of items relating to despair, inability to escape tinnitus and fear of having a terrible disease, because they may suggest to people with mild tinnitus that they will eventually have these concerns, creating feelings of negativity before treatment has started. Because these items on the THI correlated only moderately with the more neutrally worded items on the TFI, the authors suggested that the THI Catastrophic subscale might measure a different severity domain than the TFI and may be useful in combination with the TFI as an outcome measure.
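
Responsiveness here was indexed by effect sizes for pre- to post-treatment change. As a hedged illustration only, the sketch below computes a standardized effect size for paired scores (mean change divided by the standard deviation of the change scores), one common way to compare the responsiveness of two questionnaires; the data are made up and the exact statistic used by Meikle and colleagues is not reproduced here.

```python
# Illustrative only: a standardized effect size for pre/post change (mean change
# divided by the SD of the change scores), one common index of responsiveness.
# The data are made up and the statistic used by Meikle et al. may differ.
from statistics import mean, stdev

def paired_effect_size(pre: list[float], post: list[float]) -> float:
    changes = [after - before for before, after in zip(pre, post)]
    return mean(changes) / stdev(changes)

# A larger absolute effect size for one questionnaire than another on the same
# patients would indicate greater responsiveness to treatment-related change.
pre_scores  = [70.0, 65.0, 80.0, 55.0, 60.0]
post_scores = [50.0, 48.0, 60.0, 40.0, 47.0]
print(round(paired_effect_size(pre_scores, post_scores), 2))  # negative = improvement
```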

The Tinnitus Functional Index (TFI), like other previously published tinnitus questionnaires, shows promise as a tool to measure and classify tinnitus severity. The test items are easy for respondents to understand, and the questionnaire can be administered quickly at or prior to the initial appointment. An additional benefit of the TFI appears to be its validity as an outcome measure of treatment effectiveness. This is critically important for guiding clinical decisions and modifying ongoing treatment plans. It also suggests that the TFI could be useful in laboratory research as a standardized way to evaluate and compare tinnitus treatment methods or to identify subject characteristics for inclusion in treatment groups. For instance, if a treatment is expected to affect the negative emotional impact of tinnitus more than the functional impact, the TFI could be useful in identifying appropriate subject candidates who are experiencing strong emotional reactions to their tinnitus. The TFI is one of the most systematically validated methods of assessing a patient’s reaction to their tinnitus. Ease of application and interpretation place the TFI among the most compelling assessment options for clinicians working with tinnitus patients.

If you would like to use the TFI, it is available on a website posted by Oregon Health & Science University (OHSU). OHSU owns the copyright to the TFI, and its permission is required to use the instrument. The request form takes about 3 minutes to complete and gives you access to the TFI form and instructions. There is no fee to use the TFI.

http://www.ohsu.edu/xd/health/services/ent/services/tinnitus-clinic/tinnitus-functional-index.cfm

References

Axelsson, A. (1992). Conclusion to Panel Discussion on Evaluation of Tinnitus Treatments. In J.M. Aran & R. Dauman (Eds) Tinnitus 91. Proceedings of the Fourth International Tinnitus Seminar (pp. 453-455). New York, NY: Kugler Publications.

Beck, A.T., Guth, D. & Steer, R.A. (1997). Screening for major depression disorders in medical inpatients with the Beck Depression Inventory for Primary Care. Behaviour Research and Therapy 35, 785-791.

Dobie, R.A. (2004b). Overview: Suffering From Tinnitus. In J.B. Snow (Ed) Tinnitus: Theory and Management (pp.1-7). Lewiston, NY: BC Decker Inc.

Fabrigar, L.R., Wegener, D.T. & MacCallum, R.C. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods 4, 272-299.

Kamalski, D.M., Hoekstra, C.E. & VanZanten, B.G. (2010). Measuring disease-specific health-related quality of life to evaluate treatment outcomes in tinnitus patients: A systematic review. Otolaryngology Head and Neck Surgery 143, 181-185.

Meikle, M.B. (1992). Methods for Evaluation of Tinnitus Relief Procedures. In J.M. Aran & R. Dauman (Eds.) Tinnitus 91: Proceedings of the Fourth International Tinnitus Seminar (pp. 555-562). New York, NY: Kugler Publications.

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

Moran, L.A., Guyatt, G.H. & Norman, G.R. (2001). Establishing the minimal number of items for a responsive, valid, health-related quality of life instrument. Journal of Clinical Epidemiology 54, 571-579.

Newman, C.W., Jacobson, G.P. & Spitzer, J.B. (1996). Development of the Tinnitus Handicap Inventory. Archives of Otolaryngology Head and Neck Surgery 122, 143-148.

Stouffer, J.L. & Tyler, R. (1990). Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 55, 439-453.

Tyler, R. & Baker, L.J. (1983). Difficulties experienced by tinnitus sufferers. Journal of Speech and Hearing Disorders 48, 150-154.

The Top 5 Audiology Research Articles from 2012

2012 was an impressive year for scientific publication in audiology research and hearing aids. Narrowing the selection to 15 or 20 articles was far easier than selecting 5 top contenders. After some thought and discussion, here is our selection of the top 5 articles published in 2012.

1. Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss

Cox, R.M., Johnson, J.A., & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, 33, 573-587.

This article is the second in a series that investigated relationships between cochlear dead regions and benefits received from hearing aids. A sample of patients diagnosed with high-frequency cochlear dead regions demonstrated superior outcomes when prescribed hearing aids with a broadband response, as compared to a response that limited audibility at 1,000 Hz. These findings illustrate that patients with cochlear dead regions benefit from, and prefer, amplification at frequencies within the range of their diagnosed dead regions.

http://journals.lww.com/ear-hearing/Abstract/2012/09000/Implications_of_High_Frequency_Cochlear_Dead.2.aspx

2. The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

Pure-tone thresholds are the most commonly referenced diagnostic information when counseling families of children with hearing loss. This study compared the predictive value of pure-tone thresholds and the aided speech intelligibility index for a group of children with hearing loss. The aided speech intelligibility index proved to be a stronger predictor of word recognition, word repetition and vocabulary. These observations suggest that the aided speech intelligibility index is a useful tool in hearing aid fitting and family counseling.

http://jslhr.asha.org/cgi/content/abstract/55/3/764

3. NAL-NL2 Empirical Adjustments

Keidser, G., Dillon, H., Carter, L., & O’Brien, A. (2012). NAL-NL2 Empirical Adjustments. Trends in Amplification, 16(4), 211-223.

NAL-NL2 relies on several psychoacoustic models to derive target gains for a given hearing loss. Yet it is well understood that these models have limitations and do not account for many individual factors. The inclusion of empirical adjustments to NAL-NL2 highlights several factors that should be considered when prescribing gain for hearing aid users.

http://tia.sagepub.com/content/16/4/211.abstract

4. Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

While the outcomes of this study were not surprising, similar data had not been published in the refereed literature. The authors show that patients fit to a prescriptive target (i.e., NAL-NL1) report significantly better outcomes than patients fit to the lower-gain targets that are offered in fitting software as ‘first-fit’ prescriptions. This study is a testament to the importance of counseling patients regarding audibility and the necessity of real-ear measurement to ensure audibility.

http://aaa.publisher.ingentaconnect.com/content/aaa/jaaa/2012/00000023/00000010/art00003

5. Conducting qualitative research in audiology: A tutorial

Knudsen, L.V., Laplante-Levesque, A., Jones, L., Preminger, J.E., Nielsen, C., Lunner, T., Hickson, L., Naylor, G., & Kramer, S.E. (2012). Conducting qualitative research in audiology: A tutorial. International Journal of Audiology, 51, 83-92.

A substantial majority of the audiologic research literature reports on quantitative data, discussing group outcomes and average trends. The challenges of capturing individual differences and clearly documenting field experiences require a different approach to data collection and analysis. Qualitative analysis leverages data from transcribed interviews or subjective reports to probe these individual experiences. This tutorial paper describes methods for qualitative analysis and cites existing studies that have used them.

http://informahealthcare.com/doi/abs/10.3109/14992027.2011.606283

Summarizing the Benefits of Bilateral Hearing Aids

Mencher, G.T. & Davis, A. (2006). Bilateral or unilateral amplification: is there a difference? A brief tutorial. International Journal of Audiology 45 (S1), S3-11.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors. 

The decision to fit bilateral hearing loss with two hearing aids is influenced by a number of factors. The recommendation of two hearing aids may be contraindicated for financial reasons or because of near-normal hearing or profound loss in one ear, but the consensus among clinicians is that bilateral amplification is preferable for individuals with aidable hearing loss in both ears. Mencher and Davis examine a variety of considerations that may affect bilateral benefit, including speech intelligibility in noise, localization and directionality, sound quality, tinnitus suppression, binaural integration and auditory deprivation. Research in these areas is discussed with reference to clinical indications for hearing aid fitting.

The authors begin with a clarification of the terms binaural and bilateral. They explain that a bilateral fitting refers to the use of hearing aids on both ears, whereas binaural hearing refers to the integration of signals arriving at two ears independently. They point out that standardization of these terms should help avoid confusion in the discussion of bilateral versus unilateral hearing aid fittings.

Because speech is the most important mode of everyday communication, studies of hearing aid benefit typically employ speech intelligibility measures in quiet and noisy conditions. Early studies investigating aided speech intelligibility yielded conflicting reports, with some in favor of bilateral amplification (Markle & Aber, 1958; Wright & Carhart, 1960; Olsen & Carhart, 1967; Markides, 1980) and others showing no difference between unilateral and bilateral fittings (Hedgecock & Sheets, 1958; DeCarlo & Brown, 1960; Jerger & Dirks, 1961).

Many early studies were criticized for methodological choices that could have obscured bilateral benefits, such as the use of a single noise source or test materials that were not representative of everyday, conversational speech. More recent work has examined speech intelligibility under conditions that more closely approximate real-world listening, with multiple noise sources and sentence-based test materials. For instance, Kobler & Rosenhall (2002) studied intelligibility and localization for randomly presented speech from locations surrounding the listener, in the presence of speech-weighted noise from multiple loudspeakers. They found that bilateral amplification improved performance over unilateral fittings and unaided conditions. Their findings confirmed earlier work by Kobler, Rosenhall and Hansson (2001), in which bilateral benefits were reported for speech recognition, localization and sound quality, as well as more recent investigations such as that of McArdle and colleagues (2012).

Sound localization has implications for everyday environmental awareness as well as speech perception. Studies of auditory scene analysis (Bregman, 1990) underscore the importance of localization for identifying and attending to specific sound sources. This has specific relevance for understanding conversation in complex environments in which the speech must first be identified and separated from competing sound sources before higher level processing can occur (Stevens, 1996). Therefore, the effect of hearing aids on localization is likely to impact an individual’s overall ability to understand conversational speech in a noisy environment.

Although the physical presence of a hearing aid and earmold obscures some pinna-based localization cues, the use of bilateral hearing aids should aid horizontal localization under some circumstances. Individuals with moderate to severe hearing loss who wear only one hearing aid may hear some sounds only on the aided side, whereas binaural time and intensity cues may be preserved in a bilateral fitting. Current hearing aids with bilateral data exchange that can account for interaural phase and time cues may offer additional binaural localization benefits. Individuals in social situations are likely to be conversing with one or more people at roughly the same vertical elevation. Therefore, preservation of horizontal localization cues with bilateral hearing aids may outweigh the loss of pinna cues and may have more relevance for speech intelligibility, especially in noisy conditions.
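
To give a sense of the scale of the interaural timing cues at stake, the sketch below uses the classic Woodworth spherical-head approximation to estimate interaural time differences as a function of source azimuth. The head radius and speed of sound are nominal assumed values, and a rigid sphere is only a rough stand-in for a real head.

```python
# Rough illustration of interaural time differences (ITDs) using the classic
# Woodworth spherical-head approximation: ITD = (a / c) * (theta + sin(theta)).
# The nominal values below (8.75 cm head radius, 343 m/s) are assumptions.
import math

HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND_M_S = 343.0

def itd_microseconds(azimuth_deg: float) -> float:
    """Approximate ITD for a distant source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta)) * 1e6

for az in (0, 30, 60, 90):
    print(f"{az:>2} degrees azimuth -> ~{itd_microseconds(az):.0f} microseconds")
# At 90 degrees this yields roughly 650-700 microseconds, the order of magnitude
# of the largest interaural time differences a bilateral fitting can preserve.
```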

Sound quality encompasses a number of attributes that include clarity, fullness, loudness and naturalness. Bilateral hearing aid use may improve the quality of these attributes. Balfour & Hawkins (1992) examined eight sound quality judgments for listeners with mild to moderate hearing loss, tested with unilateral and bilateral hearing aids.  Subjects judged the sound quality of speech in quiet and noise, and music in a test booth, living room and concert hall. Subjects had a significant preference for bilateral hearing aids for all sound quality dimensions, with clarity being ranked as the most important.  This finding is in agreement with Erdman & Sedge (1981) who reported that clarity was the most significant benefit of bilateral amplification. Naidoo & Hawkins (1997) reported bilateral benefits for sound quality and speech intelligibility in high levels of background noise.

Tinnitus suppression is another area in which bilateral hearing aid use appears to offer an advantage over unilateral fittings. A questionnaire study by Brooks & Bulmer (1981) found that 66.52% of bilaterally aided respondents experienced a reduction in tinnitus versus only 12.7% with unilateral aids. Surr, Montgomery & Mueller (1985) reported that about half of their subjects with tinnitus experienced partial or total relief from tinnitus with hearing aid use. Melin et al. (1987) found that there were differences in tinnitus relief based on the number of hours of use per day. Taken together, these studies suggest that individuals who suffer from tinnitus may experience relief with hearing aids and are more likely to do so with consistent, bilateral hearing aid use.

There is evidence to suggest that some individuals experience better results with unilateral fittings. Binaural integration of simultaneous auditory signals in asymmetric hearing loss may have negative implications for bilateral hearing aid use. This was first described by Arkebauer, Mencher & McCall (1971), who reported that amplified signals presented to two asymmetrically impaired ears resulted in speech discrimination that was worse than the better ear alone and similar to the poorer ear alone. Hood & Prasher (1990) simulated bilateral hearing loss and found poorer speech discrimination when dissimilar distortion patterns were sent to each ear and significant improvement when identical distortion patterns were sent to the two ears. The results were interpreted to suggest that an inability to process incongruent or dissimilar speech input from the two ears could contribute to the rejection of two hearing aids. Jerger et al. (1993) reported similar findings and explained that stimulation of the poor ear was interfering with the response of the better ear. They posited that binaural interference could affect approximately 10% of elderly hearing aid users. Binaural interference may be caused by age-related atrophy or corpus callosum demyelination resulting in poor inter-hemispheric transfer of auditory information (Chmiel et al., 1997), and individuals experiencing binaural interference may perform better with one hearing aid.

Auditory deprivation is commonly cited when recommending bilateral hearing aids. Silman, Gelfand and Silverman (1984) first described the effect, noting that in unilateral fittings on bilaterally impaired individuals, speech discrimination in the unaided ear declined relative to the aided ear, while pure-tone thresholds and speech reception thresholds were not affected. Gelfand, Silman & Ross (1987) again found reduced speech recognition scores over time (4-17 years) for unilaterally aided individuals. Hurley (1999) also reported that unilaterally aided subjects were more likely to experience monaural reductions in word recognition scores compared to bilaterally aided subjects. Subsequent studies determined that the auditory deprivation effect was reversible, and subjects who were later fit with a second aid experienced improved word recognition (Silverman & Silman, 1990; Silverman & Emmer, 1993; Silman et al., 1992). Byrne & Dirks (1996) expanded on the concept of auditory deprivation, reporting that it may also affect localization and intensity discrimination. Though more research in this area may be warranted, Mencher & Davis note that the best way to treat auditory deprivation is to avoid it, with bilateral amplification as part of the solution.

Though research provides insight into possible predictors of success, an important measure of success can be found in post-fitting reports of unilateral and bilateral hearing aid users. Self-assessments of hearing handicap and disability can help hearing aid users express how their hearing aids affect important activities and everyday communication. Some investigations using self-assessment techniques have revealed a preference for bilateral hearing aid use (Chung & Stephens, 1983, 1986; Stephens et al., 1991) whereas others have revealed a preference for unilateral fittings (Cox et al., 2011). Because many factors contribute to an individual’s preferences and perceived success, self-assessments should be used in combination with verification measures and consideration of individual attributes, such as age, experience with hearing aids, audiometric configuration and speech discrimination ability. Most patient questionnaires target specific topics such as satisfaction, comfort, usage patterns and speech intelligibility, so it may be useful to combine measures to gain comprehensive information about a patient’s experience. The Speech, Spatial and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004), for instance, measures hearing disability in a variety of circumstances. Because it also examines directional, distance and spatial perception, the SSQ may provide more insight into the effect of bilateral versus unilateral amplification on hearing in everyday situations.

Mencher and Davis’s review suggests that there are numerous likely benefits to bilateral amplification for most, but not necessarily all, individuals. Bilateral hearing aids may offer an improved signal-to-noise ratio, reduced annoyance from tinnitus, improved sound quality and better localization in complex listening environments. Hearing aids are typically dispensed with a trial period of 30-60 days with relatively low financial risk to the patient in the event of a return. Therefore, it seems sensible to recommend bilateral fittings for candidates with bilateral hearing loss, knowing that a return of one hearing aid will be possible if contraindications to bilateral hearing aid use should arise during the trial period. Close monitoring of performance and comfort during the trial is essential, especially for individuals with asymmetrical hearing loss or a history of unilateral hearing aid use. In these cases, it may be necessary to reduce gain and output or increase compression in the poorer or previously unaided ear, to accommodate the likely inter-aural differences in acclimatization rate. Ears with more hearing loss and/or less aided experience generally take more time to adapt to amplification, so gradual adjustment and focused counseling may be necessary to eventually achieve satisfactory binaural balance.

The authors conclude by pointing out that the only way to know if a patient is successful with their hearing aids is to ask them! The interaction of several factors such as sound quality, localization, noise tolerance, loudness discomfort and physical comfort will contribute to patient satisfaction. Ultimately, clinicians should develop a clinical strategy that employs objective and subjective measures to truly document benefit and satisfaction with the hearing aid fitting—be it unilateral or bilateral.

References

Arkebauer, H.J., Mencher, G.T. & McCall, C. (1971). Modification of speech discrimination in patients with binaural asymmetrical hearing loss. Journal of Speech and Hearing Disorders 36, 208-212.

Balfour, P.B. & Hawkins, D.B. (1992). A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli. Ear and Hearing 13, 331-339.

Bregman, A.S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, Mass.: Bradford Books, MIT Press.

Brooks, D.N. & Bulmer, D. (1981). Survey of Binaural Hearing Aid Users. Ear and Hearing 2, 220-224.

Byrne, D., Noble, W. & Lepage, B. (1992). Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. Journal of the American Academy of Audiology 3, 369-382.

Chmiel, R., Jerger, J., Murphy, E., Pirozzolo, F. & Tooley-Young, C. (1997). Unsuccessful use of binaural amplification by an elderly person. Journal of the American Academy of Audiology 8, 1-10.

Chung, S. & Stephens, S.D.G. (1983). Binaural hearing aid use and the hearing measurement scale. IRCS Medical Science, 11:721-722. In W. Noble, 1998. Self-Assessment of Hearing and Related Functions, (London: Whurr).

Chung, S. & Stephens, S.D.G. (1986). Factors influencing binaural hearing aid use. British Journal of Audiology 20, 129-140.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197

DeCarlo, L.M. & Brown, W.J. (1960). The effectiveness of binaural hearing for adults with hearing impairment. Journal of Auditory Research 1, 35-76.

Erdman, S. & Sedge, R. (1981). Subjective comparisons of binaural versus monaural amplification. Ear and Hearing 2, 225-229.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Gelfand, S., Silman, S. & Ross, L. (1987). Long term effects of monaural, binaural and no amplification in subjects with bilateral hearing loss. Scandinavian Audiology 16, 201-207.

Hebrank, J. & Wright, D. (1974). Spectral cues used in the localization of sound sources on the median plane. Journal of the Acoustical Society of America 56, 1829-1834.

Hedgecock, L.D.  & Sheets, B.V. (1958).  A comparison of monaural and binaural hearing aids for listening to speech. Archives of Otolaryngology 68, 624-629.

Hood, J.D. & Prasher, D.K. (1990). Effect of simulated bilateral cochlear distortion on speech discrimination in normal subjects. Scandinavian Audiology 19, 37-41.

Hurley, R.M. (1999) . Onset of auditory deprivation. Journal of the American Academy of Audiology 10, 529-534.

Jerger, J. & Dirks, D. (1961). Binaural hearing aids: An enigma. Journal of the Acoustical Society of America 33, 537-538.

Jerger, J., Silman, S., Lew, H.L. & Chmiel, R. (1993). Case studies in binaural interference: converging evidence from behavioral and electrophysiological measures. Journal of the American Academy of Audiology, 122-131.

Kobler, S., Rosenhall, U., & Hansson, H. (2001) . Bilateral hearing aids – effects and consequences from a user perspective. Scandinavian Audiology 30, 223-235.

Kobler, S. & Rosenhall, U. (2002). Horizontal localization and speech intelligibility with bilateral and unilateral hearing aid amplification. International Journal of Audiology 41, 392-400.

Markides,  A. (1980). Binaural Hearing Aids. NY: Academic Press.

Markle,  D.M. & Aber, W. (1958). A clinical evaluation of monaural and binaural hearing aids. Archives of Otolaryngology 67, 606-608.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Mehrgardt, S. & Mellert, V. (1977). Transformational characteristics of the external human ear. Journal of the Acoustical Society of America 61, 1567-1576.

Melin, L., Scott, B., Lindberg, P. & Lyttkens, L. (1987). Hearing aids and tinnitus – an experimental group study. British Journal of Audiology 21, 91-97.

Mencher, G.T. & Davis, A. (2006). Bilateral or unilateral amplification: is there a difference? A brief tutorial. International Journal of Audiology 45 (S1), S3-11.

Middlebrooks, J.C. & Green, D.M. (1991). Sound localization by human listeners. Annual Review of Psychology 42, 135-159.

Naidoo, S.V. & Hawkins, D.B. (1997). Monaural/binaural preferences: effect of hearing aid circuit on speech intelligibility and sound quality. Journal of the American Academy of Audiology 8, 188-202.

Noble, W., Sinclair, S. & Byrne, D. (1998). Improvement in aided sound localization with open earmolds: Observations in people with high-frequency hearing loss. Journal of the American Academy of Audiology 9, 25-34.

Olsen, W.R. & Carhart, R. (1967). Development of test procedures for evaluation of binaural hearing aids. Bulletin of Prosthetics Research 10, 22-49.

Searle, C., Braida, L., Cuddy, D. & Davis, M. (1975).  Binaural pinna disparity: Another auditory localization cue. Journal of the Acoustical Society of America 57, 448-455.

Silverman, C & Emmer, M.B. (1993). Auditory deprivation and recovery in adults with asymmetric sensorineural hearing impairment. Journal of the American Academy of Audiology 4, 338-346.

Silverman, C. & Silman, S. (1990).  Apparent auditory deprivation from monaural amplification and recovery with binaural amplification: 2 case studies. Journal of the American Academy of Audiology 1, 175-180.

Silman, S., Gelfand, S. & Silverman, C. (1984). Late-onset auditory deprivation: effects of monaural versus binaural hearing aids. Journal of the Acoustical Society of America 76, 1357-1362.

Silman, S., Silverman, C.A., Emmer, M.B. & Gelfand, S. (1992).  Adult-onset auditory deprivation. Journal of the American Academy of Audiology 3, 390-396.

Stephens, S.D., Callaghan, D.E., Hogan, S., Meredith, R., Rayment, A. & Davis, A.C. (1991) . Acceptability of binaural hearing aids: a crossover study. Journal of the Royal Society of Medicine 84, 267-269.

Stevens, K. (1996). Amplitude-modulated and unmodulated time-varying sinusoidal sentences: the effects of semantic and syntactic context. Doctoral dissertation, Northwestern University (University of Michigan Press, AAT 9632785).

Surr, R.K., Montgomery, A.A. & Mueller, H.G. (1985). Effect of amplification on tinnitus among new hearing aid users. Ear and Hearing 6, 71-75.

Wright, H.N. & Carhart, R. (1960). The efficiency of binaural listening among the hearing impaired. Archives of Otolaryngology 72, 789-797.

Transitioning the Patient with Severe Hearing Loss to New Hearing Aids

Convery, E., & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors. 

Many individuals with severe-to-profound hearing loss are full-time, long-term hearing aid users. Because they rely heavily on their hearing aids for everyday communication, they are often reluctant to try new technology, and it is common to see patients with severe hearing loss keep a set of hearing aids longer than those with mild-to-moderate losses. These older hearing aids offered less effective feedback suppression and a narrower frequency range than those available today. As a result, many severely impaired hearing aid users were fitted with inadequate high-frequency gain and compensatory increases in low- to mid-frequency amplification. Having adapted to this frequency response, they may reject new hearing aids with increased high-frequency gain, stating that they sound too tinny or unnatural. Similarly, those who have adjusted to linear amplification may reject wide-dynamic-range compression (WDRC) as too soft, even though the strategy may provide some benefits compared to their linear hearing aids.

Convery and Keidser evaluated a method to gradually transition experienced, severely impaired hearing aid users to new amplification characteristics. They measured subjective and objective outcomes as subjects took incremental steps toward a more appropriate frequency response. Twenty-three experienced adult hearing aid users participated in the study. Participation was limited to subjects whose current gain and frequency response differed significantly from targets based on NAL-RP, a modification of the NAL formula for severe to profound hearing losses (Byrne et al., 1991). Most subjects’ own instruments had more gain at 250-2000 Hz and less gain at 6-8 kHz compared to NAL-RP targets, so the experimental transition involved adapting to less low- and mid-frequency gain and more high-frequency gain.

Subjects in the experimental group were fitted bilaterally with WDRC behind-the-ear hearing instruments. Directional microphones, noise reduction and automatic features were turned off, and volume controls were activated with an 8 dB range. The hearing aids had two programs: the first, called the “mimic” program, had a gain/frequency response adjusted to match the subject’s current hearing aids; the second was set to NAL-RP targets. The MPO was the same for the mimic and NAL-RP programs. The programs were not manually accessible to the user; they were adjusted only by the experimenters at test sessions.

Four incremental programs were created for each participant in the experimental group. Each step was approximately a 25% progression from the mimic program frequency response to the NAL-RP prescribed response. At 3-week intervals, participants were switched to the next incremental program, approaching NAL-RP settings as the experiment progressed. The programs in the control group’s hearing aids remained constant for the duration of the study.
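
Conceptually, each incremental program is an interpolation between the gain/frequency response of the mimic program and the NAL-RP target. The sketch below illustrates the idea with 25% steps; the frequencies, gain values and step schedule are placeholders, not fitting data from Convery and Keidser's study.

```python
# Illustrative sketch of the stepwise transition described above: each program is
# an interpolation between the "mimic" response and the NAL-RP target response.
# Frequencies and gain values below are placeholders, not data from the study.

FREQS_HZ = [250, 500, 1000, 2000, 4000, 6000]
mimic_gain_db = [35, 38, 40, 42, 30, 22]   # mimics the patient's current aids
nalrp_gain_db = [28, 32, 36, 44, 40, 34]   # prescriptive target

def transition_program(step: int, total_steps: int = 4) -> list[float]:
    """Gain/frequency response for one incremental program (step 1..total_steps)."""
    fraction = step / total_steps          # 0.25, 0.5, 0.75, 1.0
    return [round(m + fraction * (t - m), 1)
            for m, t in zip(mimic_gain_db, nalrp_gain_db)]

for step in range(1, 5):                   # e.g., a new program every 3 weeks
    print(f"Program {step}: {transition_program(step)} dB at {FREQS_HZ} Hz")
```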

All subjects attended 8 sessions. At the initial session, subjects’ own instruments were measured in a 2cc coupler and RECD measurements were obtained with their own earmolds. The experimental hearing aids were fitted at the next session, and subjects returned for follow-up sessions at 1 week post-fitting and at 3-week intervals thereafter, until 15 weeks post-fitting.

Subjects evaluated the mimic and NAL-RP programs in paired comparisons at 1 week and 15 weeks post-fitting. The task used live dialogues with female talkers in four everyday environments: café, office, reverberant stairwell and outdoors with traffic noise in the background. Hearing aid settings were switched from mimic to NAL-RP with a remote control, without audible program change beeps, so subjects were unaware of their current program. They were asked to indicate their preference for one program over the other on a 4-point scale: no difference, slightly better, moderately better or much better.

Speech discrimination was evaluated with the Beautifully Efficient Speech Test (BEST; Schmitt, 2004), which measured the aided SRT for sentence stimuli. Loudness scaling was then conducted to determine the most comfortable loudness level and range (MCL/R). Finally, subjects responded to a questionnaire concerning overall loudness comfort, speech intelligibility, sound quality, use of the volume control, use of their own hearing aids and perceived changes in audibility and comfort. Speech discrimination, loudness scaling and questionnaire administration took place for all participants at 3-week intervals, starting at the 3-week post-fitting session.

One goal of the study was to determine if there would be a change in speech discrimination over time or a difference between the experimental and control groups. Analysis of BEST SRT scores yielded no significant difference between the experimental and control groups, nor was there a significant change in SRT over time. There was a significant interaction between these variables, indicating that the experimental group demonstrated slightly poorer SRT scores over time, whereas the control group’s SRTs improved slightly over time.

Subjects rated perceptual disturbance, or how much the hearing aid settings in the current test period differed from those in the previous period and how disturbing the difference was. There was no significant effect for the experimental or control groups, but there was a tendency for reports of perceptual disturbance to decrease over time for the control group and increase for the experimental group. The mimic programs for the control group were consistent, so control subjects likely became acclimated over time. The experimental group, however, had incremental changes to their program at each session, so it is not surprising that they reported more perceptual disturbance. This was only a slight trend, however, indicating that even the experimental group experienced relatively little disturbance as their hearing aids approached NAL-RP targets.

Analysis of the paired comparison responses indicated a significant overall preference for the mimic program over the NAL-RP program. There was an interaction between environment and listening program, showing a strong preference for the mimic program in office and outdoor environments and somewhat less of a preference in the café and stairwell environments. When asked about their criteria for the comparisons, subjects most commonly cited speech clarity, loudness comfort and naturalness, regardless of whether mimic fit or NAL-RP was preferred.  There was no significant effect of time on program preference, but there was a slight increase in the control group’s preference for mimic at the end of the study, whereas the experimental group shifted slightly toward NAL-RP, away from mimic.

Over the course of the study, Convery and Keidser’s subjects demonstrated acceptance of new frequency responses with less low- to mid-frequency gain and more high frequency gain than their current hearing aids. No significant differences were noted between experimental and control groups for loudness, sound quality, voice quality, intelligibility or overall performance, nor did these variables change significantly over time. Though all subjects preferred the mimic program overall, there was a trend for the experimental group to shift slightly toward a preference for the NAL-RP settings, whereas the control group did not. This indicates that the experimental subjects had begun to acclimate to the new, more appropriate frequency response. Acclimatization might have continued to progress, had the study examined performance over a longer period of time. Prior research indicates that acclimatization to new hearing aids can progress over the course of several months and individuals with moderate and severe losses may require more time to adjust than individuals with milder losses (Keidser et al, 2008).

Reports of perceptual disturbance increased as incremental programs approached NAL-RP settings. This may not be surprising to clinicians, as hearing aid patients often require a period of acclimatization even after relatively minor changes to their hearing aid settings. Furthermore, clinical observation supports the suggestion that individuals with severe hearing loss may be even more sensitive to small changes in their frequency response. Allowing more than three weeks between program changes may result in less perceptual disturbance and easier transition to the new frequency response. Clinically, perceptual disturbance with a new frequency response can also be mitigated by counseling and encouraging patients that they will feel more comfortable with the new hearing aids as they progress through their trial periods.  It might also be helpful to extend the trial period (which is usually 30-45 days) for individuals with severe to profound hearing losses, to accommodate an extended acclimatization period.

Individuals with severe-to-profound hearing loss often hesitate to try new hearing aids.  Similarly, audiologists may be reluctant to recommend new instruments with WDRC or advanced features for fear that they will be summarily rejected. Convery and Keidser’s results support a process for transitioning experienced hearing aid users into new technology and suggest an alternative for clinicians who might otherwise hesitate to attempt departures from a patient’s current frequency response.

Because this was a double-blind study, the research audiologists were unable to counsel subjects as they would in a typical clinical situation.  The authors note that counseling during transition is of particular importance for severely impaired hearing aid users, to ensure realistic expectations and acceptance of the new technology. Though the initial fitting may approximate the client’s old frequency response, follow-up visits at regular intervals should slowly implement a more desirable frequency response.  Periodically, speech discrimination and subjective responses should be evaluated and the transition should be stopped or slowed if decreases in intelligibility or perceptual disturbances are noted.

In addition to changes in the frequency response, switching to new hearing aid technology usually means the availability of unfamiliar features such as directional microphones, noise reduction and many wireless features. Special features such as these can be introduced after the client acclimates to the new frequency response, or they can be relegated to alternate programs to be used on an experimental basis by the client. For instance, automatic directional microphones are sometimes not well-received by individuals who have years of experience with omnidirectional hearing aids. By offering directionality in an alternate program, the individual can test it out as needed and may be less likely to reject the feature or the hearing aids.  It is critical to discuss proper use of the programs and to set up realistic expectations.  Because variable factors such as frequency resolution and sensitivity to incremental amplification changes may affect performance and acceptance, the transition period should be tailored to the needs of the individual and monitored closely with regular follow-up appointments.

References

Baer, T., Moore, B.C.J. & Kluk, K. (2002).  Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 112, 1133-1144.

Barker, C., Dillon, H. & Newall, P. (2001). Fitting low ratio compression to people with severe and profound hearing losses. Ear and Hearing. 22, 130-141.

Byrne, D., Parkinson, A. & Newall, P. (1991).  Modified hearing aid selection procedures for severe/profound hearing losses. In: Studebaker, G.A. , Bess, F.H., Beck, L. eds. The Vanderbilt Hearing Aid Report II. Parkton, MD: York Press, 295-300.

Ching, T.Y.C., Dillon, H., Lockhart, F., vanWanrooy, E. & Carter, L. (2005). Are hearing thresholds enough for prescribing hearing aids? Poster presented at the 17th Annual American Academy of Audiology Convention and Exposition, Washington, DC.

Convery, E. & Keidser, G. (2011). Transitioning hearing aid users with severe and profound loss to a new gain/frequency response: benefit, perception and acceptance. Journal of the American Academy of Audiology. 22, 168-180.

Flynn, M.C., Davis, P.B. & Pogash, R. (2004). Multiple-channel non-linear power hearing instruments for children with severe hearing impairment: long-term follow-up. International Journal of Audiology. 43, 479-485.

Keidser, G., Hartley, D. & Carter, L. (2008). Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey. American Journal of Audiology. 17, 136-146.

Keidser, G., O’Brien, A., Carter, L., McLelland, M., and Yeend, I. (2008) Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology. 47(10), 621-635.

Kuhnel, V., Margolf-Hackl, S. & Kiessling, J. (2001). Multi-microphone technology for severe to profound hearing loss. Scandinavian Audiology 30 (Suppl. 52), 65-68.

Moore, B.C.J. (2001). Dead regions in the cochlea: diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification. 5, 1-34.

Moore, B.C.J., Killen, T. & Munro, K.J. (2003). Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. International Journal of Audiology. 42, 465-474.

Schmitt, N. (2004). A New Speech Test (BEST Test). Practical Training Report. Sydney: National Acoustic Laboratories.

Vickers, D.A., Moore, B.C.J. & Baer, T. (2001). Effect of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 110, 1164-1175.

Addressing patient complaints when fine-tuning a hearing aid

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patients’ descriptions. Journal of the American Academy of Audiology 14 (7).

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not represent the opinions of the original authors.

As part of any clinically robust protocol, a hearing aid fitting will be objectively verified with real-ear measures and validated with a speech-in-noise test. Fine tuning and follow-up adjustments are an equally important part of the fitting process. This stage of the routine fitting process does not follow standardized procedures and is almost always directed by a patient’s complaints or descriptions of real-world experience with the hearing aids. This can be a challenging dynamic for the clinician. Patients may have difficulty putting their auditory experience into words and different people may describe similar sound quality issues in different ways.  Additionally, there may be several ways to address any given complaint and a given programming adjustment may not have the same effect on different hearing aids.

Hearing aid manufacturers often include a fine-tuning guide or automated fitting assistant within their software to help the clinician make appropriate adjustments for common patient complaints. There are limitations to the effectiveness of these fine tuning guides in that they are inherently specific to a limited range of products and the suggested adjustments are subject to the expertise and resources of that manufacturer.  The manner in which a sound quality complaint is described may differ between manufacturers and the recommended adjustments in response to the complaint may differ as well.

There have been a number of efforts to develop a single hearing aid troubleshooting guide that could be used across devices and manufacturers (Moore et al., 1998; Gabrielsson et al., 1979, 1988, 1990; Lundberg et al., 1992; Ovegard et al., 1997). The first, and perhaps most challenging, step toward this goal has been to determine the most common descriptors that patients use for sound quality complaints. Moore and colleagues (1998) developed a procedure in which responses on three rating scales (e.g., “loud versus quiet”, “tinny versus boomy”) were used to make adjustments to gain and compression settings. However, their procedure did not allow for the bevy of descriptors that patients create, limiting its potential utility in everyday clinical settings. Gabrielsson and colleagues, in a series of Swedish studies, developed a set of reliable terms to describe sound quality. These descriptors have since been translated and used in English-language research (Bentler et al., 1993).

As hearing instruments become more complicated with numerous adjustable parameters, and given the wide range of experience and expertise of individuals fitting hearing instruments today, an independent fine tuning guide is an appealing concept. Lorienne Jenstad and her colleagues proposed an “expert system” for troubleshooting hearing aid complaints.  The authors explained that expert systems “emulate the decision making abilities of human experts” (Tharpe et al., 1993).  To develop the system, two primary questions were asked:

1) What terms do hearing impaired listeners use to describe their reactions to specific hearing aid fitting problems?

2) What is the expert consensus on how these patient complaints can be addressed by hearing aid adjustment?

There were two phases to the project. To address question one, the authors surveyed clinicians for their reports on how patients describe sound quality with regard to specific fitting problems. To address question two, the most frequently reported descriptors from the clinicians’ responses were submitted to a panel of experts to determine how they would address the complaints.

The authors sent surveys to 1934 American Academy of Audiology members and received 311 qualifying responses. The surveys listed 18 open-ended questions designed to elicit descriptive terms that patients would likely use for hearing aid fitting problems. For example, the question “If the fitting has too much low-frequency gain…” yielded responses such as “hollow”, “plugged” and “echo”.  The questions probed common problems related to gain, maximum output, compression, physical fit, distortion and feedback.  The survey responses yielded a list of the 40 most frequent descriptors of hearing aid fitting problems, ranked according to the number of occurrences.

The list of descriptors was used to develop a questionnaire to probe potential solutions for each problem. Each descriptor was framed as, “How would you change the fitting if your patient reports that ___?”, and 23 possible fitting solutions were listed. These questionnaires were completed by a panel of experts with a minimum of five years of clinical experience. Respondents could offer more than one solution to a problem, and the solutions were weighted based on the order in which they were offered. There was strong agreement among experts, suggesting that their responses could be used reliably to provide troubleshooting solutions based on sound quality descriptions. The expert responses also agreed with the initial survey that was sent to the group of 1934 audiologists, supporting the validity of these response sets.
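
Because experts could offer several solutions per complaint and solutions were weighted by the order in which they were offered, the final ranking amounts to a rank-weighted vote across respondents. The sketch below shows one simple way such an aggregation can be implemented; the 3/2/1 weighting and the example answers are illustrative assumptions, not the scheme or data reported by Jenstad and colleagues.

```python
# Illustrative rank-weighted aggregation of expert answers for one complaint.
# The 3/2/1 weighting and the example answers below are assumptions for
# illustration; the published study's exact weighting scheme may differ.
from collections import Counter

def rank_solutions(expert_answers: list[list[str]]) -> list[tuple[str, int]]:
    """Each inner list holds one expert's solutions, in the order they were offered."""
    tally: Counter[str] = Counter()
    for answers in expert_answers:
        for position, solution in enumerate(answers):
            tally[solution] += max(3 - position, 1)  # earlier answers carry more weight
    return tally.most_common()

# Hypothetical responses to: "How would you change the fitting if your patient
# reports that their ear feels plugged?"
experts = [
    ["increase vent", "decrease low-frequency gain"],
    ["increase vent", "decrease low-frequency gain", "check earmold fit"],
    ["decrease low-frequency gain", "increase vent"],
]
print(rank_solutions(experts))  # most strongly endorsed solutions first
```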

The expert responses resulted in a fine-tuning guide in the form of tables or simplified flow charts. The charts list individual descriptors with potential solutions listed below in the order in which they should be attempted.  For example, below the descriptor “My ear feels plugged”, the first solution is to “increase vent” and the second is to “decrease low frequency gain”. The idea is that the clinician would first try to increase the vent diameter and if that didn’t solve the problem, they would move on to the second option, decreasing low frequency gain. If an attempted solution creates another sound quality problem, the table can be utilized to address that problem in the same way.
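
In programming terms, the guide is simply an ordered lookup: each complaint maps to a list of adjustments ranked by the expert panel, and the clinician works down the list until the complaint is resolved. The short Python sketch below illustrates that logic using the plugged-ear example from the text; the data structure and function names are hypothetical illustrations, not part of Jenstad and colleagues’ published tables.

```python
# Minimal sketch of the expert-system lookup described above (hypothetical code,
# not the authors' published tool). Each descriptor maps to adjustments in the
# order the expert panel ranked them.
FINE_TUNING_GUIDE = {
    "my ear feels plugged": [
        "increase vent diameter",       # first-ranked solution in the published example
        "decrease low-frequency gain",  # tried only if the first adjustment fails
    ],
    # ...additional descriptors and their ranked solutions would be listed here
}

def next_solution(complaint, attempts_made):
    """Return the next ranked adjustment to try, or None when the list is exhausted."""
    solutions = FINE_TUNING_GUIDE.get(complaint.strip().lower(), [])
    return solutions[attempts_made] if attempts_made < len(solutions) else None

print(next_solution("My ear feels plugged", 0))  # -> increase vent diameter
print(next_solution("My ear feels plugged", 1))  # -> decrease low-frequency gain
print(next_solution("My ear feels plugged", 2))  # -> None (re-describe the problem or escalate)
```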

The authors correctly point out that there are limitations to this tool and that proposed solutions will not necessarily have the same results with all hearing aids. For instance, depending on the compressor characteristics, raising a kneepoint might increase OR decrease the gain at input levels below the kneepoint. It is up to the clinician to be familiar with a given hearing aid and its adjustable parameters to arrive at the appropriate course of action.

Beyond manipulation of the hearing aid itself, the optimal solution for a particular patient complaint might not be the first recommendation in any tuning guide. For instance, for the fitting problem labeled “Hearing aid is whistling”, the fourth solution listed in the table is “check for cerumen”.  This solution appeared fourth in the ranking based on the frequency of responses from the experts on the panel. However, any competent clinician who encounters a patient with hearing aid feedback should check for cerumen first before considering programming modifications.

The expert system proposed by Jenstad and her colleagues represents a thoroughly examined, reliable step toward development of a universal troubleshooting guide for clinicians. Their paper was published in 2003, so some items should be updated to suit modern hearing aids. For example, current feedback management strategies result in fewer and less challenging feedback problems. Solutions for feedback complaints might now include “calibrate the feedback management system” rather than gain or vent adjustments. Similarly, most hearing aids now offer solutions for listening in noise that extend beyond the simple inclusion of directional microphones, so “directional microphone” may no longer be a sufficiently descriptive solution for complaints about hearing in noise, as the patient is probably already using a directional microphone.

Overall, the expert system proposed by Jenstad and colleagues is a helpful clinical tool, especially if positioned as a guide to help patients find appropriate terms to describe their perceptions. However, as the authors point out, it is not meant to replace prescriptive methods, measures of verification and validation, or the expertise of the audiologist. The responsibility remains with the clinician to stay informed about current technology and its implications for real-world hearing aid performance and to communicate with patients in enough detail to understand their comments and address them appropriately.

References

Bentler, R.A., Niebuhr, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness II: subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Jenstad, L.M., Van Tasell, D.J. & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology 14 (7).

Moore, B.C.J., Alcantara, J.I. & Glasberg, B.R. (1998). Development and evaluation of a procedure for fitting multi-channel compression hearing aids. British Journal of Audiology 32, 177-195.

Gabrielsson, A. (1979). Dimension analyses of perceived sound quality of sound-reproducing systems. Scandinavian Journal of Psychology 20, 159-169.

Gabrielsson, A., Hagerman, B., Bech-Kristensen, T. & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America 88, 1359-1366.

Gabrielsson, A., Schenkman, B.N. & Hagerman, B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. Journal of Speech and Hearing Research 31, 166-177.

Lundberg, G., Ovegard, A., Hagerman, B., Gabrielsson, A. & Brandstrom, U. (1992). Perceived sound quality in a hearing aid with vented and closed earmold equalized in frequency response. Scandinavian Audiology 21, 87-92.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A., Bengtsson, M. & Brandstrom, U. (1997). Sound quality judgments during acclimatization of hearing aids. Scandinavian Audiology 26, 43-51.

Schweitzer, C., Mortz, M. & Vaughan, N. (1999). Perhaps not by prescription – but by perception. High Performance Hearing Solutions 3, 58-62.

Tharpe, A.M., Biswas, G. & Hall, J.W. (1993). Development of an expert system for pediatric auditory brainstem response interpretation. Journal of the American Academy of Audiology 4, 163-171.

Recommendations for fitting patients with cochlear dead regions

Cochlear Dead Regions in Typical Hearing Aid Candidates:

Prevalence and Implications for Use of High-Frequency Speech Cues

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Audibility is a well-known predictor of speech recognition ability (Humes, 2007) and audibility of high-frequency information is of particular importance for consonant identification.  Therefore, audibility of high-frequency speech cues is appropriately regarded as an important element of successful hearing aid fittings (Killion & Tillman, 1982; Skinner & Miller, 1983). In contrast to this expectation, some studies have reported that high-frequency gain might have limited or even negative impact on speech recognition abilities of some individuals (Murray & Byrne, 1986; Ching et al., 1998; Hogan & Turner, 1998). These researchers observed that when high-frequency hearing loss exceeded 55-60dB, some listeners were unable to benefit from increased high-frequency audibility.  A potential explanation for this variability was provided by Brian Moore (2001), who suggested that an inability to benefit from amplification in a particular frequency region could be due to cochlear “dead regions” or regions where there is a loss of inner hair cell functioning.

Moore suggested that hearing aid fittings could potentially be improved if clinicians were able to identify patients with cochlear dead regions (DRs), working under the assumption that a diagnosis of DRs may contraindicate high-frequency amplification. He and his colleagues developed the TEN test as a method of determining the presence of cochlear dead regions (Moore et al., 2000, 2004). The advent of the TEN test provided a standardized measurement protocol for DRs, but there is still wide variability in the reported prevalence of DRs. Estimates range from as low as 29% (Preminger et al., 2005) to as high as 84% (Hornsby & Dundas, 2009), with other studies reporting DR prevalence somewhere in the middle of that range. Several factors are likely to contribute to this variability, including degree of hearing loss, audiometric configuration and test technique.

In addition to the variability in reported prevalence of DRs, there is also variability in reports of how DRs affect the ability to benefit from high-frequency speech cues (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004). It remains unclear whether high-frequency amplification recommendations should be modified to reflect the presence of DRs. Most research agrees that as hearing thresholds increase, the likelihood of DRs also increases, and hearing aid users with severe to profound hearing losses are likely to have at least one DR. Because a large proportion of hearing aid users have moderate to severe hearing losses, Dr. Cox and her colleagues wanted to determine the prevalence of DRs in this population. In addition, they examined the effect of DRs on the use of high-frequency speech cues by individuals with moderate to severe loss.

Their study addressed two primary questions:

1) What is the prevalence of dead regions (DRs) among listeners with hearing thresholds in the 60-90dB range?

2) For individuals with hearing loss in the 60-90dB range, do those with DRs differ from those without DRs in their ability to use high-frequency speech cues?

One hundred and seventy adults with bilateral, flat or sloping sensorineural hearing loss were tested. All subjects had thresholds of 60 to 90dB in the better ear for at least part of the range from 1-3kHz and thresholds no better than 25dB for frequencies below 1kHz. Subjects ranged in age from 38 to 96 years, and 59% of the subjects had experience with hearing aids.

First, subjects were evaluated for the presence of DRs with the TEN test. Then, speech recognition was measured using high-frequency emphasis (HFE) and high-frequency emphasis, low-pass filtered (HFE-LP) stimuli from the QSIN test (Killion et al., 2004). HFE items on this test are amplified up to 32dB above 2.5kHz, whereas the HFE-LP items have much less gain in this range. Comparing subjects’ responses to these two types of stimuli allowed the investigators to assess changes in speech intelligibility with additional high-frequency cues. Presentation levels for the QSIN were chosen using a loudness scale and bracketing procedure to arrive at a level that the subject considered “loud but okay”. Finally, audibility differences for the two QSIN conditions were estimated using the Speech Intelligibility Index, based on ANSI S3.5-1997 (ANSI, 1997).

The TEN test results revealed that 31% of the participants had DRs at one or more test frequencies. Of the 307 ears tested, 23% were found to have a DR at one or more frequencies. Among those who tested positive for DRs, about one-third had DRs in both ears; the remaining two-thirds had a DR in only one ear, split roughly evenly between left and right. Mean audiometric thresholds were essentially identical for the two groups below 1kHz, but above 1kHz thresholds were significantly poorer for the group with DRs than for the group without DRs. DRs were most prevalent at frequencies above 1.5kHz. There were no age or gender differences.

On the QSIN test, the mean HFE-LP scores were significantly poorer than the mean HFE scores for both groups. There was also a significant difference in performance based on whether or not the participants had DRs. Perhaps more interestingly, there was a significant interaction between DR group and test stimulus condition: the additional high-frequency information in the HFE stimuli produced slightly greater performance gains for the group without DRs than for the group with DRs. Furthermore, subjects with one or more isolated DRs were better able to benefit from the high-frequency cues in the HFE lists than were subjects with multiple, contiguous DRs. Although a few individuals demonstrated lower scores for the HFE stimuli, the differences were not significant and could have been explained by measurement error. Therefore, the authors concluded that the additional high-frequency information in the HFE stimuli was unlikely to have had a detrimental effect on performance for these individuals.

As had also been reported in previous studies, the group with DRs had poorer mean audiometric thresholds than the group without DRs, so it was possible that audibility played a role in QSIN performance. Analysis of the audibility of QSIN stimuli for the two groups revealed that the high-frequency cues in the HFE lists were indeed more audible for the group without DRs. Even after accounting for this audibility effect, the presence of DRs still had a small but significant effect on performance.

The results of this study suggest that listeners with cochlear DRs still benefit from high frequency speech cues, albeit slightly less than those without dead regions.  The performance improvements were small and the authors caution that it is premature to draw firm conclusions about the clinical implications of this study.  Despite the need for further examination, the results of the current study certainly do not support any reduction in prescribed gain for hearing aid candidates with moderate to severe hearing losses.  The authors acknowledge, however, that because the findings of this and other studies are based on group data, it is possible that specific individuals may be negatively affected by amplification within dead regions. Based on the research to date, this seems more likely to occur in individuals with profound hearing loss who may have multiple, contiguous DRs.

More study is needed to determine the most effective clinical approach to managing cochlear dead regions in hearing aid candidates. Future research should include hearing aid users and should examine, for example, the effects of noise on everyday hearing aid performance for individuals with DRs. A study by Mackersie et al. (2004) showed that subjects with DRs suffered more negative effects of noise than subjects without DRs. If there is a convergence of evidence to this effect, then recommendations about the use of high-frequency gain, directionality and noise reduction could be refined as they relate to DRs. For now, Dr. Cox and her colleagues recommend that until there are clear criteria to identify individuals for whom high-frequency gain could have deleterious effects, clinicians should continue using best-practice protocols and provide high-frequency gain according to current prescriptive methods.

References

ANSI (1997). American National Standard Methods for Calculation of the Speech Intelligibility Index (Vol. ANSI S3.5-1997). New York: American National Standards Institute.

Ching,T., Dillon, H. & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America 103, 1128-1140.

Cox, R.M., Alexander, G.C., Johnson, J. & Rivera, I. (2011).  Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear & Hearing 32 (3), 339-348.

Hogan, C.A. & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America 104, 432-441.

Humes, L.E. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Killion, M. C. & Tillman, T.W. (1982). Evaluation of high-fidelity hearing aids. Journal of Speech and Hearing Research 25, 15-25.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences and implications for the fitting of hearing aids. Trends in Amplification 5, 1-34.

Moore, B.C.J., Huss, M., Vickers, D.A., et al. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B.C.J., Glasberg, B.R., Stone, M.A. (2004). New version of the TEN test with calibrations in dB HL. Ear and Hearing 25, 478-487.

Murray, N. & Byrne, D. (1986). Performance of hearing-impaired and normal hearing listeners with various high-frequency cut-offs in hearing aids. Australian Journal of Audiology 8, 21-28.

Skinner, M.W. & Miller, J.D. (1983). Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss.  Audiology 22, 253-279.

A preferred speech stimulus for testing hearing aids

Development and Analysis of an International Speech Test Signal (ISTS)

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Current hearing aid functional verification measures are described in the standards IEC 60118 and ANSI S3.22 and use stationary signals, including sine wave frequency sweeps and unmodulated noise. Test stimuli are presented to the hearing instrument and frequency-specific gain and output are measured in a coupler or ear simulator. Current standardized measurement methods require the instrument to be set at maximum gain or a reference test setting, with adaptive features such as noise reduction and feedback management turned off.

These procedures provide helpful information for quality assurance and determining fitting ranges for specific hearing aid models. However, because they were designed for linear, time-invariant hearing instruments, they have limitations for today’s nonlinear, adaptive instruments and cannot provide meaningful information about real-life performance in the presence of dynamically changing acoustic environments.

Speech is the most important stimulus encountered by hearing aid users and nonlinear hearing aids with adaptive characteristics process speech differently than they do stationary signals like sine waves and unmodulated noise. Therefore, it seems preferable for standardized test procedures to use stimuli that are as close as possible to natural speech.  Indeed, there are some hearing aid test protocols that use samples of natural speech or live speech. But natural speech stimuli will have different spectra, fundamental frequencies, and temporal characteristics depending on the speaker, the source material and the language. For hearing aid verification measures to be comparable to each other it is necessary to have standardized stimuli that can be used internationally.

Alternative test stimuli have been proposed based on the long-term average speech spectrum (Byrne et al., 1994) or temporal envelope fluctuations (Fastl, 1987). The International Collegium for Rehabilitative Audiology (ICRA) developed a set of stimuli (Dreschler et al., 2001) that reflect the long-term average speech spectrum and have speech-like modulations that differ across frequency bands. ICRA stimuli have advantages over modulated noise and sine wave stimuli in that they share some characteristics with speech, but they lack speech-like comodulation characteristics (e.g., a common fundamental frequency). Furthermore, ICRA stimuli are often classified by signal processing algorithms as “noise” rather than “speech”, so they are less than optimal for measuring how hearing aids process speech.

The European Hearing Instrument Manufacturers Association (EHIMA) is developing a new measurement procedure for nonlinear, adaptive hearing instruments, and an important part of this initiative is the development of a standardized test signal, the International Speech Test Signal (ISTS). The development and analysis of the ISTS were described in a paper by Holube et al. (2010).

There were fifteen articulated requirements for the ISTS, based on available test signals and knowledge of natural speech, the most clinically salient of which are:

  • The ISTS should resemble normal speech but should be non-intelligible.
  • The ISTS should be based on six major languages, representing a wide range of phonological structures and fundamental frequency variations.
  • The ISTS should be based on female speech and should deviate from the international long-term average speech spectrum (ILTASS) for females by no more than 1dB.
  • The ISTS should have a bandwidth of 100 to 16,000Hz and an overall RMS level of 65dB.
  • The dynamic range should be speech-like and comparable to published values for speech (Cox et al., 1988; Byrne et al., 1994).
  • The ISTS should contain voiced and voiceless components. Voiced components should have a fundamental frequency characteristic of female speech.
  • The ISTS should have short-term spectral variations similar to speech (e.g., formant transitions).
  • The ISTS should have modulation characteristics similar to speech (Plomp, 1984).
  • The ISTS should contain short pauses similar to natural running speech.
  • The ISTS stimulus should have a 60 second duration, from which other durations can be derived.
  • The stimulus should allow for accurate and reproducible measurements regardless of signal duration.

Twenty-one female speakers of six different languages (American English, Arabic, Mandarin, French, German and Spanish) were recorded while reading a story, the text and translations of which came from the Handbook of the International Phonetic Association (IPA). One recording from each language was selected based on a number of criteria including voice quality, naturalness and median fundamental frequency. The recordings were filtered to meet the ILTASS characteristics described by Byrne et al. (1994) and were then split into 500ms segments that roughly corresponded to individual syllables. These syllable-length segments were concatenated in pseudo-random order to generate sections of 10 or 15 seconds. The resulting sections could be combined to generate different durations of the ISTS stimulus, and no single language was used more than once in any 6-segment sequence. Speech interval and pause durations were analyzed to ensure that ISTS characteristics would closely resemble natural speech patterns.
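
As a rough illustration of the splicing logic just described, the sketch below (a simplified, hypothetical reconstruction, not the actual ISTS generation code) draws 500 ms syllable-length segments from six language pools in blocks of six, where each block is a random permutation of the languages, so that no language appears twice within a 6-segment block.

```python
import random

SEGMENT_MS = 500  # approximate syllable-length segments, as described above
LANGUAGES = ["English", "Arabic", "Mandarin", "French", "German", "Spanish"]

def build_section(segment_pools, target_seconds, seed=1):
    """Concatenate 500 ms segments into a section of the requested duration.
    Segments are drawn in blocks of six, each block a random permutation of the
    six languages (a simplified, hypothetical reading of the published procedure)."""
    rng = random.Random(seed)
    n_segments = int(target_seconds * 1000 / SEGMENT_MS)
    order = []
    while len(order) < n_segments:
        block = LANGUAGES[:]
        rng.shuffle(block)
        for lang in block:
            order.append((lang, rng.choice(segment_pools[lang])))
    return order[:n_segments]

# Hypothetical pools of segment indices drawn from each source recording.
pools = {lang: list(range(100)) for lang in LANGUAGES}
section = build_section(pools, target_seconds=10)
print(len(section), "segments =", len(section) * SEGMENT_MS / 1000, "seconds")
```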

For analysis purposes, a 60-second ISTS stimulus was created by concatenation of 10- and 15-second sections.  This ISTS stimulus was measured and compared to natural speech and ICRA-5 stimuli based on several criteria:

  • Long-term average speech spectrum (LTASS)
  • Short term spectrum
  • Fundamental frequency
  • Proportion of voiceless segments
  • Band-specific modulation spectra
  • Comodulation characteristics
  • Pause and speech duration
  • Dynamic range (spectral power level distribution)

On all of the analysis criteria, the ISTS resembled natural speech as well as or better than the ICRA-5 stimulus. Notable improvements for the ISTS over the ICRA-5 stimulus were its comodulation characteristics and its 20-30dB dynamic range, as well as pauses and combinations of voiced and voiceless segments that more closely resembled the distributions found in natural speech. Overall, the ISTS was deemed an appropriate speech-like stimulus to propose for the new standard measurement protocol.

Following the detailed analysis, the ISTS was used to measure four different hearing instruments, each programmed to fit a flat, sensorineural hearing loss of 60dB HL. Each instrument was nonlinear, with adaptive noise reduction, compression and feedback management. The first-fit algorithms from each manufacturer were used, with all microphones fixed in an omnidirectional mode. Instead of yielding gain and output measurements across frequency for a single input level, the results showed percentile-dependent gain (99th, 65th and 30th percentiles) across frequency, referenced to the long-term average speech spectrum. The percentile-dependent gain values provided information about nonlinearity: the softer components of speech were represented by the 30th percentile, while moderate and loud speech components were represented by the 65th and 99th percentiles, respectively. The relations between these three percentiles represented the differences in gain for soft, moderate and loud sounds.
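
To make the percentile idea concrete, the sketch below assumes the ISTS input and the recorded coupler output are available as sampled waveforms; it band-filters each signal, computes short-term levels, and reports the output-minus-input difference at the 30th, 65th and 99th percentiles per band. It is a simplified illustration of the concept, not the standardized analysis procedure, and the band edges and frame length are arbitrary choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def short_term_levels(x, fs, f_lo, f_hi, frame_ms=125):
    """Short-term RMS levels (dB re full scale) of x within one analysis band."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    n = int(fs * frame_ms / 1000)
    frames = y[: len(y) // n * n].reshape(-1, n)
    return 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12)

def percentile_gain(ists_in, aided_out, fs,
                    bands=((250, 500), (500, 1000), (1000, 2000), (2000, 4000))):
    """Output-minus-input level at the 30th, 65th and 99th percentiles per band,
    approximating the percentile-dependent gain described above."""
    gains = {}
    for f_lo, f_hi in bands:
        lin = short_term_levels(ists_in, fs, f_lo, f_hi)
        lout = short_term_levels(aided_out, fs, f_lo, f_hi)
        gains[(f_lo, f_hi)] = {p: round(np.percentile(lout, p) - np.percentile(lin, p), 1)
                               for p in (30, 65, 99)}
    return gains

if __name__ == "__main__":
    fs = 16000
    noise = np.random.randn(fs * 10)                # stand-in for a recorded input signal
    print(percentile_gain(noise, 2.0 * noise, fs))  # a linear "aid" with about 6 dB of gain
```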

The measurement technique described by Holube and colleagues, using the ISTS, offers significant advantages over current measurement protocols with standard sine wave or noise stimuli. First and perhaps most importantly, it allows hearing instruments to be programmed to real-life settings with adaptive signal processing features active. It measures how hearing aids process a stimulus that closely resembles natural speech, so clinical verification measures may provide more meaningful information about everyday performance. By showing changes in percentile gain values across frequency, it also makes compression effects directly visible and may be used to evaluate noise reduction algorithms as well. The authors also note that the ISTS’s acoustic resemblance to speech, combined with its lack of linguistic content, may have additional applications in diagnostic testing, telecommunications or communication acoustics.

The ISTS is currently available in some probe microphone equipment and will likely be introduced in most commercially available equipment over the next few years. Its introduction brings a standardized speech stimulus for hearing aid testing to the clinic. An important component of clinical best practice is measurement of a hearing aid’s response characteristics, which is most easily accomplished through in situ probe microphone measurement in combination with a speech test stimulus such as the ISTS.

References

American National Standards Institute (2003). ANSI S3.22-2003. Specification of hearing aid characteristics. New York: Acoustical Society of America.

Byrne, D., Dillon, H., Tran, K., Arlinger, S. & Wilbraham, K. (1994). An international comparison of long-term average speech spectra. Journal of the Acoustical Society of America, 96(4), 2108-2120.

Cox, R.M., Matesich, J.S. & Moore, J.N. (1988). Distribution of short-term rms levels in conversational speech. Journal of the Acoustical Society of America, 84(3), 1100-1104.

Dreschler, W.A., Verschuure, H., Ludvigsen, C. & Westerman, S. (2001). ICRA noises: Artificial noise signals with speech-like spectral and temporal properties for hearing aid assessment. Audiology, 40, 148-157.

Fastl, H. (1987). Ein Störgeräusch für die Sprachaudiometrie. Audiologische Akustik, 26, 2-13.

Holube, I., Fredelake, S., Vlaming, M. & Kollmeier, B. (2010). Development and analysis of an international speech test signal (ISTS). International Journal of Audiology, 49, 891-903.

International Electrotechnical Commission, 1994, IEC 60118-0. Hearing Aids: Measurement of electroacoustical characteristics, Bureau of the International Electrotechnical Commission, Geneva, Switzerland.

IPA, 1999. Handbook of the International Phonetic Association. Cambridge University Press.

Plomp, R. (1984). Perception of speech as a modulated signal. In M.P.R. van den Broecke & A. Cohen (eds), Proceedings of the 10th International Congress of Phonetic Sciences, Utrecht. Dordrecht: Foris Publications, 29-40.

Will placing a receiver in the canal increase occlusion?

The influence of receiver size on magnitude of acoustic and perceived measures of occlusion.

Vasil-Dilaj, K.A., & Cienkowski, K.M. (2010). The influence of receiver size on magnitude of acoustic and perceived measures of occlusion. American Journal of Audiology 20, 61-68.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

The occlusion effect, an increase in bone-conducted sound when the ear canal is occluded, is a consideration for many hearing aid fittings. The hearing aid shell or earmold restricts the release of low-frequency energy from the ear canal (Revit, 1992), resulting in an increase in low-frequency sound pressure level at the eardrum, sometimes up to 25dB (Goldstein & Hayes, 1965; Mueller & Bright, 1996; Westermann, 1987). Hearing aid users experiencing occlusion will complain of an “echo” or “hollow” quality to their own voices, and the sound of their own chewing can be particularly annoying. Indeed, perceived occlusion is reported to be a common reason for dissatisfaction with hearing aids (Kochkin, 2000).

Occlusion from a hearing aid shell or earmold is usually managed by increasing vent diameter or decreasing the length of the vent in order to decrease the acoustic mass of the vent (Dillon, 2001; Kiessling, et al, 2005). One potential risk of increasing vent diameter is increased risk of feedback, but this problem has been alleviated by improvements in feedback cancellation. Better feedback management has also resulted in more widespread use of open fit, receiver-in-canal (RIC) instruments which have proven effective in reducing measured and perceived occlusion (Dillon, 2001; Kiessling et al., 2005; Kiessling et al., 2003; Vasil & Cienkowski, 2006).

Though open-fit BTE hearing instruments are designed to be acoustically transparent, some open fittings still result in perceived occlusion. Interestingly, perceived occlusion is not always strongly or even significantly correlated with measured acoustic occlusion (Kiessling et al., 2005; Kuk et al., 2005; Kampe & Wynne, 1996), so it is apparent that other factors also contribute to the perception of occlusion. The size of the receiver and/or eartip, as well as the size of the ear canal, affects the amount of air flow in and out of the ear canal, and it seems likely that these factors could influence the amount of acoustic and perceived occlusion.

Thirty adults, 17 men and 13 women, participated in the study. All had normal hearing, unremarkable otoscopic examinations and normal tympanograms. Two measures of ear canal volume were obtained: volume estimates from tympanometry and estimates determined from earmold impressions that were sent to a local hearing aid manufacturer. Participants were fitted binaurally with RIC hearing instruments. Instead of the domes used clinically with RIC instruments, flexible receiver sleeves designed specifically for research purposes were used. These sleeves allowed the researchers to increase the overall circumference of the receiver systematically, so that six receiver size conditions could be evaluated: no receiver, receiver only (with a circumference of 0.149 in.), 0.170 in., 0.190 in., 0.210 in. and 0.230 in.

Real-ear unoccluded and occluded measures were obtained while subjects vocalized the vowel /i/. Subjects monitored the level of their vocalizations with a sound level meter. The real-ear occlusion effect (REOE) was determined by subtracting the unoccluded response from the occluded response (REOE = REOR - REUR). Subjective measures were obtained by asking subjects to rate their perception of occlusion on a five-point scale ranging from “no occlusion” to “complete occlusion”. To avoid bias in the occlusion ratings, participants were not allowed to view the hearing aids or receiver sleeves until after testing was completed.
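
Because the acoustic metric is simply the level difference between the occluded and unoccluded responses, a short worked example may be useful. The SPL values and the flagging threshold below are invented for illustration and are not data or criteria from the study.

```python
# Hypothetical example of the REOE calculation (REOE = REOR - REUR); the SPL
# values are illustrative only, not data from Vasil-Dilaj & Cienkowski (2010).
frequencies_hz = [250, 500, 750, 1000, 1500]
reur_db_spl    = [68.0, 71.0, 69.5, 70.0, 66.0]  # unoccluded response while vocalizing /i/
reor_db_spl    = [79.0, 78.5, 73.0, 72.0, 68.5]  # occluded response, same vocalization level

reoe_db = [round(occ - unocc, 1) for occ, unocc in zip(reor_db_spl, reur_db_spl)]
for f, reoe in zip(frequencies_hz, reoe_db):
    # 10 dB is an arbitrary illustrative threshold, not a published criterion
    flag = "  <-- possible occlusion concern" if reoe >= 10 else ""
    print(f"{f} Hz: REOE = {reoe} dB{flag}")
```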

Results indicated that measured acoustic occlusion was very low for all conditions, especially below 500Hz, where it was below 2dB for most of the receiver conditions. For frequencies above 500Hz, REOE increased as receiver size increased. The no receiver and receiver only conditions had the least amount of measured occlusion and the largest receiver sizes had the most. There was no significant interaction between receiver size and frequency.

Perceived occlusion also increased as receiver size increased and though it was mild for most participants in most of the conditions, for the largest receiver condition, some participants rated occlusion as severe. Perceived occlusion was not significantly correlated with measured acoustic occlusion for low frequencies, and the two measures were only weakly correlated for frequencies between 700-1500Hz.

There was no significant relationship between either measure of ear canal volume and perceived or acoustic measures of occlusion. However, adequate ear canal volume to accommodate all receiver sizes was an inclusion criterion for the study, so the authors suggest that smaller ear canal volume could still be a factor in perceived or acoustic occlusion and may warrant further study.

The results of the current investigation show that occlusion was minimal for most of the receiver sizes. These findings are in agreement with previous studies of vented hollow molds, completely open IROS shells (Vasil & Cienkowski, 2006), large 2.4mm vents and silicone ear tips (Kiessling et al., 2005). REOEs for the two largest receivers matched results for a hollow mold with a 1mm vent (Kuk et al., 2009), and REOEs for the two smallest receivers matched results for hollow molds with 2mm and 3mm vents (Kuk et al., 2009). The authors also point out that there was minimal insertion loss for all conditions. Insertion loss from closed earmolds can amount to 20dB (Sweetow, 1991) and can contribute to a perception of occlusion or poor voice quality. The relative lack of insertion loss is yet another potential advantage of open and RIC fittings.

Perception of occlusion did increase with the size of the receiver, but overall differences were small. This is in agreement with prior research suggesting that reduction of air flow out of the ear canal results in more low-frequency energy in the ear canal (Revit, 1992), which can cause an increase in occlusion (Dillon, 2001). The authors point out that although subjects were not able to see the receivers prior to insertion, they were probably aware of the size and weight differences and could have been influenced by the perception of a larger object in the ear as opposed to actual occlusion. This may also be the case for hearing aid users, perhaps particularly so for individuals with smaller or tortuous ear canals.

The occlusion effect can be challenging, especially when anatomical or other constraints result in the use of minimal venting for individuals with good low-frequency hearing. The results reported here suggest that acoustic occlusion with RIC instruments is slight and may not always be related to perceived occlusion. Therefore, a client’s perception of “hollow” voice quality, “echoey” sound quality or a plugged sensation may be the most reliable indication of occlusion and the most important determinant of eartip size or venting characteristics. The administration of an occlusion rating scale or other self-evaluation techniques may also prove helpful in evaluating occlusion and its impact on overall hearing aid satisfaction.

References

Dillon, H. (2001). Hearing aids. New York, NY: Thieme.

Goldstein, D.P.,  & Hayes, C.S. (1965). The occlusion effect in bone conduction hearing.  Journal of Speech and Hearing Research 8, 137-148.

Kampe, S.D., & Wynne, M.K. (1996). The influence of venting on the occlusion effect. The Hearing Journal 49(4), 59-66.

Kiessling, J., Brenner, B., Jespersen, C.T., Groth, J., & Jensen, O.D. (2005). Occlusion effect of earmolds with different venting systems. Journal of the American Academy of Audiology, 16, 237-249.

Kiessling, J., Margolf-Hackl, S., Geller, S., & Olsen, S.O. (2003). Researchers report on a field test of a non-occluding hearing instrument. The Hearing Journal 56(9), 36-41.

Kochkin, S. (2000). MarkeTrak V: Why my hearing aids are in the drawer: The consumer’s perspective. The Hearing Journal 53 (2), 34-42.

Kuk, F.K., Keenan, D., & Lau, C.C. (2005). Vent configurations on subjective and objective occlusion effect. Journal of the American Academy of Audiology 16, 747-762.

Mueller, H.G., & Bright, K.E. (1996). The occlusion effect during probe microphone measurements. Seminars in Hearing 17 (1), 21-32.

Revit, L. (1992). Two techniques for dealing with the occlusion effect. Hearing Instruments 43 (12), 16-18.

Sweetow, R. W. (1991). The truth behind “non-occluding” earmolds. Hearing Instruments 42 (1), 25.

Vasil, K.A., & Cienkowski, K.M. (2006). Subjective and objective measures of the occlusion effect for open-fit hearing aids. Journal of the Academy of Rehabilitative Audiology 39, 69-82.

Vasil-Dilaj, K.A., & Cienkowski, K.M. (2010). The influence of receiver size on magnitude of acoustic and perceived measures of occlusion. American Journal of Audiology 20, 61-68.

Westermann, V.H. (1987). The occlusion effect. Hearing Instruments, 38 (6), 43.

The DSL 5.0a is a successful fitting formula for adults

Fit to Targets, Preferred Listening Levels, and Self-Reported Outcomes for the DSL v5.0a Hearing Aid Prescription for Adults

Polonenko, M.J., Scollie, S.D., Moodie, S., Seewald, R.C., Laurnagaray, D., Shantz, J. & Richards, A. (2010) Fit to targets, preferred listening levels and self-reported outcomes for the DSL v5.0a hearing aid prescription for adults. International Journal of Audiology 49, 550-560.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

The importance of perceived benefit for successful hearing aid fittings is well established. According to two MarkeTrak studies by Sergei Kochkin (2005, 2007), perceived benefit was the number one factor contributing to hearing aid user satisfaction.  Similarly, the lack of benefit was the most commonly cited reason for hearing aid returns.  Perceived benefit from hearing aids may be determined by a number of factors, but the appropriateness of the individually fitted gain is one of the main contributors (Cox & Alexander, 1994).

The Desired Sensation Level (DSL) prescriptive method was originally developed for children and prescribes targets that are generally very close to children’s preferred listening levels. However, DSL v4.1 targets have been found to prescribe gain that is 9 to 11 dB greater than adult preferred listening levels (Scollie et al., 2005).  Therefore, DSL v5.0a was developed with lower perceived loudness levels, ones that more closely approximate the needs of adult hearing aid users.

The success of a hearing aid prescription can be measured in terms of clinical efficacy, or how closely the hearing aid settings achieve a desired clinical result or test outcome. One such measure is the Preferred Listening Level (PLL). The PLL is defined as “the sound pressure level at the eardrum that the person chooses or prefers for listening to hearing aid amplified speech” (Cox & Alexander, 1994) and represents a compromise between comfort, intelligibility, background noise and distortion (Cox, 1982). One method of measuring the PLL is to instruct listeners to adjust the volume setting of their hearing instruments to the level that sounds best to them as they listen to speech presented at a conversational level.

A related but different way to determine the success of a hearing aid fitting strategy is to measure effectiveness, or how well hearing aid settings help the user function in real-world situations. One commonly used measure of hearing aid effectiveness is the Client Oriented Scale of Improvement, or COSI (Dillon et al., 1997). On the COSI questionnaire, hearing aid users list up to five typical listening situations in which they struggle to hear or would like to hear better. Following a period of acclimatization, they rate the degree of perceived change in these situations as well as their final ability to function in each situation.

Although the DSL v5.0a prescriptive method was specifically developed for adults with acquired hearing loss, there have been relatively few studies evaluating it. Therefore, the current authors sought to determine its electroacoustic feasibility, clinical efficacy and effectiveness with adult hearing aid users. They had three primary goals:

1.  To measure final fit versus targets in a clinical environment

2.  To evaluate the preferred listening levels (PLLs) of adults versus the DSL v5.0a targets

3.  To measure the effectiveness of the DSL v5.0a prescription as reported on the COSI

Thirty subjects with predominantly sensorineural hearing loss participated in the study. Nineteen were new hearing aid users and eleven were experienced users; twenty-four were fitted binaurally and six monaurally. Subjects were fitted in private clinics, and the audiologists were specifically instructed to program and adjust the instruments to meet the patients’ needs, rather than to meet prescriptive targets.

Hearing aid fittings were matched to DSL v5.0a prescribed targets and verified with simulated real-ear measurements, to ensure consistency between test sites and to promote replicable measures. Hearing aids were set to their primary programs and were measured in 2cc couplers after individual real-ear-to-coupler differences (RECDs) were measured. Following the electroacoustic measures, the aids were fitted to the patients’ ears and adjustments were made based on patients’ subjective satisfaction. These procedures were not carried out according to any protocol established by the authors; the audiologists conducted fine-tuning adjustments as needed for each individual. After approximately 30 days, subjects returned to the clinics for fine tuning. After a total acclimatization period of 90 days, preferred listening levels (PLLs) and COSI outcome evaluations were conducted.

Electroacoustic analyses revealed that the clinical fittings were significantly correlated with the DSL v5.0a targets. Sixty-eight percent of initial fittings were within 2.9 to 4.2 dB of target and 95% were within 5.8 to 8.4 dB of target, across frequencies. These results contrast with previous research using NAL-R and NAL-NL1 targets, in which initial fittings differed from targets by 10-15dB (Sammeth et al., 1993; Aazh & Moore, 2007).
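
For readers who want to quantify fit-to-target in their own clinics, the sketch below shows one way to summarize deviations between measured output and prescriptive targets, reporting the RMS deviation and the proportion of frequencies within a chosen tolerance. The target and measured values are hypothetical and the tolerances are arbitrary; this illustrates the bookkeeping only, not the authors’ analysis.

```python
import numpy as np

# Hypothetical verification data (dB SPL in the coupler) at audiometric frequencies;
# these are not values from Polonenko et al. (2010).
freqs_hz    = np.array([250, 500, 1000, 2000, 4000])
target_db   = np.array([62.0, 68.0, 72.0, 75.0, 70.0])  # prescriptive targets
measured_db = np.array([60.5, 67.0, 73.5, 78.0, 66.5])  # clinician-adjusted fitting

deviation = measured_db - target_db
print("Deviation from target (dB):", dict(zip(freqs_hz.tolist(), deviation.tolist())))
print("RMS deviation: %.1f dB" % np.sqrt(np.mean(deviation ** 2)))
for tol in (3, 5, 10):
    pct = 100 * np.mean(np.abs(deviation) <= tol)
    print(f"{pct:.0f}% of frequencies within ±{tol} dB of target")
```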

Preferred listening levels (PLLs) were compared to targets and initial fittings and differed by only about 2dB.  The DSL v5.0a targets were on average 2.6dB lower than PLLs and 1.95dB lower than initial fittings.  Furthermore, DSL v5.0a targets were significantly correlated with PLLs at all frequencies and the targets and PLLs did not differ significantly as a function of degree of hearing loss.  The authors noted a trend for higher PLLs than targets at 250Hz, indicating that some users preferred more low-frequency output than prescribed.

COSI ratings of real-world performance were obtained at the 90-day appointment. The top five situations in which subjects hoped to hear better were similar to those chosen by subjects in the COSI normative study (Dillon et al, 1999):

1.  conversation with a group in noise

2.  conversation with a group in quiet

3.  conversation with one or two partners in noise

4.  listening to the television or radio

5.  conversation with one or two partners in quiet

Subjects were asked to rate the degree of change in their hearing with amplification as well as their final hearing ability (or hearing aid performance) in these situations. Results indicated that they judged their hearing to be “better” or “much better” for 83% of the fittings, which compares well with the normative result of 80% obtained by Dillon et al. (1999). For final hearing ability, 93% of the current respondents reported hearing well at least 75% of the time (a COSI rating of 4 or better), as compared to 90% of the normative study participants.

The purpose of the current study was to determine if DSLv5.0a prescriptive targets, developed for adults, provided electroacoustically appropriate fittings and subjectively favorable real-world results.  Indeed, clinician-adjusted fittings were within 10 dB of prescriptive targets for 92% of the subjects.  Targets also closely approximated preferred listening levels, which is particularly important because prior studies showed DSL v4.1 targets were generally higher than adults’ preferred levels.  COSI measurements indicated positive ratings for benefit and communication performance which were similar or slightly better than those obtained for the normative population.

An incidental finding of the current study was that instruments with more than six channels of processing may meet prescriptive targets more accurately than those with only six channels. This was not specifically studied in the current paper, but the authors provided a matrix of number of channels versus errors in matching to target, showing that instruments with more than six channels yielded fewer and smaller errors than those with only six channels of processing. This result is probably consistent with clinical observations that hearing aids with more channels of processing often allow a closer match to prescriptive targets than instruments with fewer channels. The importance of this factor may depend on the client’s hearing loss; gently sloping audiometric configurations may generally require fewer channels to meet targets.

The current results show that in a group of adults preferred listening levels and positive real-world outcomes were achieved with programs matched to DSL v5.0a targets, at least in quiet situations. In noisy listening situations, participants may have accessed alternate memories with directionality and noise reduction, causing amplification characteristics to differ from DSL settings.  Even if this is the case, the current study shows that the DSL v5.0a prescriptive measure for adults yields a close approximation to patient preferred settings for a wide range of hearing losses.

References

Aazh, H. & Moore, B.C.J. (2007). The value of routine real ear measurement of the gain of digital hearing aids. Journal of the American Academy of Audiology 18, 653-664.

Cox, R.M. (1982). Functional correlates of electroacoustic performance data. In: G.A. Studebaker & F.H. Bess (eds.) The Vanderbilt Hearing Aid Report. Parkton, MD: York Press, pp. 78-84.

Cox, R.M. & Alexander, G.C. (1994). Prediction of hearing aid benefit: the role of preferred listening levels. Ear and Hearing 15(1), 22-29.

Dillon, H., James, A. & Ginis, J. (1997). Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. Journal of the American Academy of Audiology 8, 27-43.

Dillon, H., Birtles, G. & Lovegrove, R. (1999). Measuring the outcomes of a National Rehabilitation Program: normative data for the Client Oriented Scale of Improvement (COSI) and the Hearing Aid User’s Questionnaire (HAUQ). Journal of the American Academy of Audiology 10, 67-79.

Kochkin, S. (2005). MarkeTrak VII: Customer satisfaction with hearing instruments in the digital age. Hearing Journal 58(9), 30-43.

Kochkin, S. (2007). MarkeTrak VII: Obstacles to adult non-user adoption of hearing aids. Hearing Journal 60(4), 24-51.

Polonenko, M.J., Scollie, S.D., Moodie, S., Seewald, R.C., Laurnagaray, D., Shantz, J. & Richards, A. (2010) Fit to targets, preferred listening levels and self-reported outcomes for the DSL v5.0a hearing aid prescription for adults. International Journal of Audiology 49, 550-560.

Sammeth, C., Peek, B., Bratt, G., Bess, F. & Amberg, S. (1993). Ability to achieve gain/frequency response and SSPL-90 under three prescription formulas with in-the-ear hearing aids. Journal of the American Academy of Audiology 4, 33-41.

Scollie, S., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., et al. (2005). The Desired Sensation Level Multistage Input/Output Algorithm. Trends in Amplification 9(4), 159-197.

Considerations for Directional Microphone Use in the Classroom

Directional Benefit in Simulated Classroom Environments

Ricketts, T., Galster, J. & Tharpe, A.M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology 16, 130-144.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Classroom acoustic environments vary widely and are affected by a number of factors including reverberation and noise from within the classroom and adjacent areas. Signal-to-noise ratio (SNR) is known to affect speech perception for children with normal hearing and those with hearing loss (Crandell, 1993; Finitzo-Hieber & Tillman, 1978). Because listeners with hearing loss typically require more favorable SNRs to achieve the same performance as normal hearing listeners, hearing-impaired students are particularly challenged by high levels of classroom noise.

FM systems are often recommended as a method for improving SNR in the classroom. However, they may not effectively convey voices other than the teacher’s, so children may be less able to hear comments or questions from other students. The additional bulk of ear-level FM systems may also prompt reluctance to wear them, as the student may perceive the equipment as calling attention to their hearing loss. Because of these and other potential limitations of FM systems, hearing aids with directional microphones offer another opportunity to improve SNR for hearing-impaired children.

The benefits of directional microphones for speech perception in the presence of background noise are well known for adults (Bentler, 2005; Ricketts & Dittberner, 2002; Ricketts, Henry & Gnewikow, 2003). Research has shown that children also benefit from directionality in laboratory conditions (Gravel, Fausel, Liskow & Chobot, 1999; Hawkins, 1984; Kuk, et al., 1999), but more information is needed on the effect of directional microphone use in classroom environments.  The study summarized in this post evaluated directional microphone use in simulated classroom situations and the subjective reaction to omnidirectional and directional modes by children and parents.

Twenty-six hearing-impaired subjects ranging in age from 10 to 17 years participated in the experiment. All but two had prior experience with hearing aids. Subjects were fitted bilaterally with behind-the-ear hearing instruments that were programmed with omnidirectional and directional modes. Digital noise reduction and feedback suppression features were disabled and all participants were fitted with unvented, vinyl, full-shell earmolds.

This study consisted of three individual experiments. The first investigated directional versus omnidirectional performance in noise in five simulated classroom scenarios:

1) Teacher Front – speech stimuli presented in front of the listener.

2) Teacher Back – speech presented behind the listener.

3) Desk Work – speech presented in front of the listener, the listener’s head oriented down toward desk

4) Discussion – three speech sources at 0 and 50-degree azimuth (left and right), simulating a round table discussion

5) Bench Seating – speech presented at 90-degree azimuth (left and right)

Speech recognition performance was evaluated in each of these scenarios using a modified version of the Hearing in Noise Test for Children (HINT-C, Nilsson, Soli & Sullivan, 1994). Speech stimuli were initially presented at 65dB SPL for the five test conditions. Noise was presented from four loudspeakers positioned 2 meters from each corner of the room. For conditions 1-3, the noise level was 55dB. For conditions 4 and 5, noise levels were fixed at 65dB.

A second experiment examined the performance of omnidirectional versus directional modes in the presence of multiple talkers. Monosyllabic words from the NU-6 lists (Tillman & Carhart, 1966) were randomly presented at 63dB SPL from speakers positioned 1.5 meters from the listener at three angles: 0 degrees (in front of the listener), 135 degrees (back right) and 225 degrees (back left). Noise was presented at 57dB SPL, yielding an SNR of 6dB.

Not surprisingly, the results of the first experiment showed that directional performance was significantly better than omnidirectional performance for Teacher Front, Desk Work and Discussion conditions, but was significantly worse for the Teacher Back condition.  There was no significant difference between omnidirectional and directional modes for the Bench Seating condition.  In the Bench Seating condition, however, subjects were not specifically instructed to look at the speaker. If some subjects did look at the speaker and others did not, individual differences between omnidirectional and directional modes may have been obscured on average.  Improved performance was generally noted as the distance between speaker and listener decreased. This is consistent with previous studies with adult listeners, which showed increased directional benefit with decreasing distance (Ricketts & Hornsby, 2003, 2007).

The second experiment yielded no significant difference in performance between omnidirectional and directional modes when speech was in front of the listener. When speech was presented behind the listener, the omnidirectional mode was significantly better than the directional mode in both the back-right and back-left conditions. The authors surmised that directional benefit may have been reduced because subjects were told that all of the talkers were important; since two of the three talkers were behind them, they may have focused more on speech coming from the back.

The current study offers insight into the potential benefit of directional microphones in classroom environments. An FM system remains the primary recommendation for improving the signal-to-noise ratio of a teacher’s voice, but overhearing other students and multiple talkers can be compromised by FM technology. Additionally, because of social, cosmetic or financial concerns, FM use may not be feasible for many students. Therefore, directional hearing instruments will likely continue to be widely recommended for hearing-impaired schoolchildren. This study reported a directional benefit ranging from 2.2 to 3.3 dB, which is consistent with studies of adult listeners (Ricketts et al., 2001). Therefore, directional microphone use in classrooms may indeed be beneficial, as long as the teacher or speaker of interest is in front of the listener. However, for round table or small group arrangements, directionality could be detrimental, especially when talkers are behind the listener. The authors point out that many school scenarios involve multiple talkers or speech from the sides and back, so directional microphone benefit may be limited overall.

The results of these experiments underscore the importance of counseling for school-age hearing aid users, as well as their parents and teachers. It is common practice to recommend preferential seating close to the teacher in the front of the classroom. Improved performance with decreases in distance from the speech source, in this and other studies, shows that this recommendation is particularly important for hearing aid users, whether or not they are in a directional mode. Furthermore, hearing-impaired students should be instructed to face the teacher so they can benefit from directional processing as well as visual cues. This should also be discussed in detail with teachers so that efforts can be made to arrange classroom seating accordingly.

An incidental finding of the first experiment showed that performance for the Desk Work condition was better than the Teacher Front condition, even though the distance between speaker and listener was comparable.  In the Desk Work condition, subjects were instructed to work on an assignment on the desk as they listened. Therefore, the listener’s head position was pointed slightly downward, which may have resulted in more optimal, horizontal positioning of the microphone ports, increasing directional effect. This finding demonstrates the importance of selecting the proper tubing or wire length, to position the hearing aid near the top of the pinna and align the microphone ports along the intended plane.

Overall, directional processing improved performance for speech sources in front of the listener and reduced performance for speech sources behind the listener. The instruments in this study were full-time omnidirectional or directional instruments, so it is unknown how automatic, adaptive directional instruments would perform under similar conditions. Because of the prevalence of automatic directionality in current hearing instruments, this is a question with important implications for school-age hearing aid users.  Perhaps automatic directionality could provide better overall access to speech in many classroom environments, but controlled study is needed before specific recommendations can be made.

References

Anderson, K.L. & Smaldino, J.J. (2000). The Children’s Home Inventory of Listening Difficulties. Retrieved from http://www.edaud.org.

Bentler, R.A. (2005). Effectiveness of directional microphones and noise reduction schemes in hearing aids: A systematic review of the evidence. Journal of the American Academy of Audiology, 16, 473-484.

Crandell, C. (1993). Speech recognition in noise by children with minimal degrees of sensorineural hearing loss. Ear and Hearing 14, 210-216.

Finitzo-Hieber, T. & Tillman, T. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440-458.

Gravel, J., Fausel, N., Liskow, C. & Chobot, J. (1999). Children’s speech recognition in noise using omnidirectional and dual-microphone hearing aid technology. Ear and Hearing, 20, 1-11.

Hawkins, D.B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders, 49, 409-418.

Kuk, F.K., Kollofski, C., Brown, S., Melum, A. & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535-548.

Nilsson, M., Soli, S.D.  & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. The Journal of the Acoustical Society of America, 95, 1085-1099.

Resnick, S.B., Dubno, J.R., Hoffnung, S. & Levitt, H. (1975). Phoneme errors on a nonsense syllable test. The Journal of the Acoustical Society of America, 58, 114.

Ricketts, T., Lindley, G. & Henry, P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear and Hearing, 22, 348-361.

Ricketts, T. & Dittberner, A.B. (2002). Directional amplification for improved signal-to-noise ratio: Strategies, measurement and limitations. In M. Valente (Ed.), Hearing aids: Standards, options and limitations (2nd ed., pp. 274-346). New York: Thieme Medical.

Ricketts, T., Galster, J. & Tharpe, A.M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology, 16, 130-144.

Ricketts, T., Henry, P. & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing, 24, 424-439.

Ricketts, T. & Hornsby, B. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing, 24, 472-484.

Ricketts, T. & Hornsby, B. (2007). Estimation of directional benefit in real rooms: A clinically viable method. In R.C. Seewald (Ed.), Hearing care for adults: Proceedings of the First International Conference (pp. 195-206). Chicago: Phonak.

Tillman, T. & Carhart, R. (1966). An expanded test for speech discrimination using CNC monosyllables (Northwestern University Auditory Test No. 6) SAM-TB-66-55. Evanston, IL: Northwestern University Press.