Starkey Research & Clinical Blog

Effective communication behavior during hearing aid appointments

Munoz, K., Ong, C., Borrie, S., Nelson, L., & Twohig, M. (2017). Audiologists’ communication behavior during hearing device management appointments. International Journal of Audiology, Early Online, 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The skill of the audiologist in communicating with a patient can significantly impact rehabilitative outcomes. Nowhere is this more evident than when an audiologist is engaged in managing a hearing device fitting. Studies have suggested a lack of patient-centered behavior in audiologist-patient interactions, including domination of speaking time, a tendency to overemphasize the technical aspects of device care, interruptions of the patient, difficulty dealing with emotion-laden aspects of rehabilitation, failure to express empathy, and a lack of active listening (e.g., Ekberg et al., 2014; Grenness et al., 2014; Grenness et al., 2015; Knudsen et al., 2010; Laplante-Levesque et al., 2014; Munoz et al., 2014; Munoz et al., 2015). These counseling tendencies can undermine patients' understanding of, and adherence to, the recommendations and information provided by the audiologist (Robinson et al., 2008).

Audiologists in training are likely to internalize and imitate how their mentors or supervisors interact with patients. Unless those instructors have themselves achieved strong interpersonal communication skills, audiologists may enter the workforce without practical counseling and communication skills, a gap that can diminish their effectiveness in the clinical setting.

The authors designed this exploratory, longitudinal study to measure audiologist communication behaviors at three time intervals: prior to a one-day training workshop, at a two-month interval, and at a six-month interval. The workshop focused on the psychosocial aspects of counseling, including the use of open-ended questions, validation of emotions, reframing and clarifying patient problems and complaints, methods for increasing motivation, and double-checking patient assumptions. In addition, five one-hour support sessions were offered to the audiologists over a three-month period following the initial workshop, covering topics such as addressing client barriers, addressing emotions, being present and non-judgmental, and developing reflection and summarizing skills. Attendance ranged from 30% to 90% of participants; one audiologist attended none of the support sessions, but most attended three or four.

Ten audiologists actively providing clinical services were evaluated on two rating scales: 1) the Behavior Competencies Rating Scale, a 10-item self-rating measure developed by the authors to evaluate the audiologist's own perception of his or her communication skills, and 2) a modified version of the Counseling Competencies Scale (Swank et al., 2012), intended to measure counseling skills and behaviors, graded by the instructor and independently by a psychology graduate student. Fifty-three patients consented to participate, and each audiologist-patient interaction was recorded. A set of coding guidelines was developed to recognize and categorize by type the counseling behaviors (interactions) exhibited by the audiologist, as well as the frequency of each behavior. The coding categories for counseling skills included encouragers, questions, listening and reflecting feelings, confrontations, goal setting, focus of counseling, and expressions of appropriate empathy, care, respect and unconditional positive regard.

The article gave examples of expressions and statements during counseling that would fall into specific coding categories. For example, an open-ended question such as "What do you think is the most challenging part of wearing (or taking care of) your hearing aids?" would be categorized as assessing and addressing barriers and motivation. An audiologist might comment to a patient who mentions they are in the process of moving, "So you have a lot going on," which would be interpreted as an instance of listening and reflection. Or the audiologist might suggest, "For homework, I'd like you to work on using a couple of the strategies we discussed," a statement that would fall into the category of planning for behavior change.

The average length of each recorded counseling session was 46 minutes, from which a selected ten-minute sample was extracted, coded and analyzed. The rate of change of audiologist behaviors, expressed as the percentage frequency of occurrence per session, was measured at the three time intervals mentioned above: baseline, post-training, and the six-month follow-up.

The authors found that throughout the six-month period, audiologists devoted the greatest share of clinical interactions to general fitting discussions, followed by educational and technical instruction. The frequencies of interactions devoted to these two areas increased slightly after the workshop, but decreased thereafter. The fewest interactions per session over the six-month period were devoted to listening and reflection, clarifying treatment goals, assessing and addressing motivation and barriers, and discussing behavior changes. Although small changes were noted in the frequencies of these behaviors over the study period, the authors concluded that the observed changes were too small to be practically meaningful. Interestingly, they also found that the time per session devoted to irrelevant conversation and small talk increased steadily over the course of the study.

A striking outcome was the significant reduction in the audiologists' own speaking time after the training workshop. Whereas audiologists dominated speaking time before training, patient and audiologist speaking times were approximately equal after the workshop. Although speaking time was not explicitly stressed in the workshop, these findings indicate a reduction in audiologist verbal dominance, suggesting that the training positively impacted this counseling behavior.

Finally, the audiologists perceived a marked improvement in their own communication skills on the self-rating scale. This perception was not fully supported by the data: the observer-rated measures showed little clinically important change in psychologically relevant interactions over the study period.

The authors suggest that one reason for the lack of meaningful change in clinician communication behavior might be the complexity of the counseling skills taught within a relatively short time frame. A short workshop on communication skills is insufficient, and the importance of teaching patient-centered communication skills to audiologists-in-training as early as possible cannot be overstated.

Although there was evidence of improvement in audiologists' counseling skills following the training workshop and supplementary instruction, it was limited. Hesitation to address patients' psychosocial concerns, express empathy when appropriate, and respond to clients' emotions indicates a possible gap in training and education. The authors recommend that clinical supervisors be aware of the critical role patient-centered counseling plays in achieving positive clinical outcomes. Further, supervisors should recognize within themselves any need to improve their personal counseling skills through continuing education.


Ekberg, K., Grenness, C. & Hickson, L. (2014). Addressing patients’ psychosocial concerns regarding hearing aids within audiology appointments for older adults. American Journal of Audiology, 23, 337-350.

Grenness, C., Hickson, L., Laplante-Levesque, A., Meyer, C., & Davidson, B. (2014). Communication patterns in audiologic rehabilitation history-taking: audiologists, patients, and their companions. Ear and Hearing, 36, 191-204.

Grenness, C., Hickson, L., Laplante-Levesque, A., Meyer, C., & Davidson, B. (2015). The nature of communication throughout diagnosis and management planning in initial audiologic rehabilitation consultations. Journal of the American Academy of Audiology, 26, 36-50.

Knudsen, L.V., Oberg, M., Nielsen, C., Naylor, G., & Kramer, S.E. (2010). Factors influencing help seeking, hearing aid uptake, hearing aid use and satisfaction with hearing aids: a review of the literature. Trends in Amplification, 14, 127-154.

Laplante-Levesque, A., Hickson, L., & Grenness, C. (2014). An Australian survey of audiologists’ preference for patient-centeredness. International Journal of Audiology, 53, S76-S82.

Munoz, K., Nelson, L., Blaiser, K., Price, T., & Twohig, M. (2015). Improving support for parents of children with hearing loss: provider training on use of targeted communications.

Munoz, K., Preston, E., & Hickens, S. (2014). Pediatric hearing aid use: how can audiologists support parents to increase consistency. Journal of the American Academy of Audiology, 25, 380-387.

Robinson, J.H., Callister, L.C., Berry, J.A., & Dearing, K.A. (2008). Patient-centered care and adherence: definitions and applications to improve outcomes. Journal of the American Academy of Nurse Practitioners, 20, 600-607.

Swank, J.M., Lambie, G.W., & Witta, E. L. (2012). An exploratory investigation of the Counseling Competencies Scale: a measure of counseling skills, dispositions, and behaviors. Counselor Education and Supervision, 51, 189-206.

A Pediatric Prescription for Listening in Noise

Crukley, J. & Scollie, S. (2012). Children’s speech recognition and loudness perception with the Desired Sensation Level v5 Quiet and Noise prescriptions. American Journal of Audiology 21, 149-162.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Most hearing aid prescription formulas attempt to balance audibility of sound with perception of loudness, while keeping amplified sound within a patient’s dynamic range (Dillon, 2001; Gagne et al., 1991a; Gagne et al., 1991b; Seewald et al., 1985). A prescriptively appropriate hearing aid fitting is particularly important for children with hearing loss: because of the demands of language development, they benefit from a higher proportion of audible sound and a broader bandwidth than diagnostically similar older children and adults (Pittman & Stelmachowicz, 2000; Stelmachowicz et al., 2000; Stelmachowicz et al., 2001; Stelmachowicz et al., 2004; Stelmachowicz et al., 2007).

Historically, providing access to speech in quiet has been a primary driver in the development of hearing aid prescription formulas. However, difficulty understanding speech in noise is one of the primary complaints of all hearing aid users, including children. In a series of studies that compared NAL-NL1 and DSL v4.1 fittings and examined children’s listening needs and preferences (Ching et al., 2010; Ching et al., 2010; Scollie et al., 2010), two distinct listening categories were identified: loud, noisy and reverberant environments, and quiet or low-level listening situations. The investigators found that children preferred the DSL fitting in quiet conditions but preferred the NAL fitting for louder sounds and when listening in noisy environments. Examination of the electroacoustic differences between the two fittings showed that the DSL fittings provided more gain overall, and approximately 10dB more low-frequency gain, than the NAL-NL1 fittings.

To address listening in noisy and reverberant conditions, DSL v5 includes separate prescriptions for quiet and noise. Relative to the formula for quiet conditions, the noise formula prescribes higher compression thresholds, lower overall gain, lower low-frequency gain and relatively more gain in the high frequencies. This study by Crukley and Scollie examined whether use of the DSL v5 Quiet and Noise formulae resulted in differences in consonant recognition in quiet, sentence recognition in noise, and loudness ratings. Because of its lower gain, the Noise formula was expected to yield lower loudness ratings and, through potentially reduced audibility, lower consonant recognition scores in quiet. No difference was expected for speech recognition in noise, as the noise floor was considered the primary limitation to audibility in noisy conditions.

Eleven children participated in the study; five elementary school children with an average age of 8.85 years and six high school children with an average age of 15.18 years. All subjects were experienced, full-time hearing aid users with congenital, sensorineural hearing losses, ranging from moderate to profound.  All participants were fitted with behind-the-ear hearing aids programmed with two separate programs: one for DSL Quiet targets and one for DSL Noise targets. The Noise targets had, on average, 10dB lower low-frequency gain and 5dB lower high-frequency gain, relative to the Quiet targets. Testing took place in two classrooms: one at the elementary school and one at the high school.
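The average offsets between the two programs can be expressed as a simple target transformation. The sketch below is illustrative only: the crossover frequency and the Quiet-program gain values are hypothetical, and the real DSL v5 Noise prescription is computed from its own formula, not by offsetting Quiet targets.

```python
# Illustrative sketch: apply the *average* offsets reported in the study
# (about 10dB less low-frequency gain, 5dB less high-frequency gain) to a
# set of Quiet-program gain targets. The crossover frequency and the gain
# values below are hypothetical, not actual DSL v5 output.

LOW_FREQ_OFFSET_DB = 10   # average reduction below the crossover
HIGH_FREQ_OFFSET_DB = 5   # average reduction at and above the crossover
CROSSOVER_HZ = 1000       # assumed boundary between "low" and "high"

def noise_targets_from_quiet(quiet_targets):
    """Derive approximate Noise-program gains from Quiet-program gains.

    quiet_targets: dict mapping frequency (Hz) -> insertion gain (dB).
    """
    noise = {}
    for freq, gain in quiet_targets.items():
        offset = LOW_FREQ_OFFSET_DB if freq < CROSSOVER_HZ else HIGH_FREQ_OFFSET_DB
        noise[freq] = gain - offset
    return noise

quiet = {250: 20, 500: 25, 1000: 30, 2000: 35, 4000: 40}  # hypothetical fit
print(noise_targets_from_quiet(quiet))
# {250: 10, 500: 15, 1000: 25, 2000: 30, 4000: 35}
```

In practice, of course, both programs would be generated by the fitting software and verified with real-ear measures; the point here is only the direction and rough size of the gain differences.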

Consonant recognition in quiet conditions was tested with the University of Western Ontario Distinctive Features Differences Test (UWO-DFD; Cheesman & Jamieson, 1996). Stimuli were presented at 50dB and 70dB SPL, by a male talker and a female talker. Sentence recognition in noise was performed with the Bamford-Kowal-Bench Speech in Noise Test (BKB-SIN; Niquette et al., 2003). BKB-SIN results are scored as the SNR at which 50% performance can be achieved (SNR-50).
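The SNR-50 concept can be illustrated with a simple interpolation between test points. Note that this is not the published BKB-SIN scoring procedure, which uses its own list-based calculation; the performance data below are invented for illustration only.

```python
# Generic illustration of the SNR-50 concept: the signal-to-noise ratio at
# which 50% of key words are repeated correctly. This linear interpolation
# between the two bracketing test points is NOT the actual BKB-SIN scoring
# rule; the performance-intensity data are hypothetical.

def snr_50(results):
    """results: list of (snr_db, proportion_correct), in descending SNR."""
    for (snr_hi, p_hi), (snr_lo, p_lo) in zip(results, results[1:]):
        if p_hi >= 0.5 >= p_lo:
            # interpolate between the two points that bracket 50% correct
            frac = (p_hi - 0.5) / (p_hi - p_lo)
            return snr_hi + frac * (snr_lo - snr_hi)
    raise ValueError("50% point not bracketed by the data")

# hypothetical data: performance worsens as the SNR decreases
data = [(15, 0.95), (10, 0.80), (5, 0.55), (0, 0.30), (-5, 0.10)]
print(round(snr_50(data), 1))  # 4.0
```

A lower SNR-50 indicates better performance: the listener tolerates more noise while still understanding half of the material.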

Loudness testing was conducted with the Contour Test of Loudness Perception (Cox et al., 1997; Cox & Gray, 2001), using BKB sentences presented in ascending then descending 4dB steps from 52dB to 80dB SPL. Subjects rated perceived loudness on an 8-point scale ranging from “didn’t hear it” to “uncomfortably loud” and indicated their responses on a computer screen. Younger children indicated their ratings on a printed sheet of the loudness categories with the help of a researcher, who then entered the responses.

The hypotheses outlined above were generally supported by the results of the study. Consonant recognition scores in quiet were better at 70dB than 50dB for both prescriptions and there was no significant difference between the Quiet and Noise fittings. There was, however, a significant interaction between prescription and presentation level, showing that performance for the Quiet fittings was consistent at the two levels but was lower at 50dB than 70dB for the Noise fittings. The change in score from Quiet to Noise at 50dB was 4.2% on average, indicating that reduced audibility in the Noise fitting may have adversely affected scores at the lower presentation level. On the sentence recognition in noise test, BKB-SIN scores did not differ significantly between the Quiet and Noise prescriptions, with some subjects scoring better in the Quiet program, some scoring better in the Noise program and most not demonstrating any significant difference between the two. Loudness ratings were lower on average for the Noise prescription. When ratings for 52-68dB SPL and 72-80dB SPL were analyzed separately, there was no difference between the Quiet and Noise prescriptions for the lower levels but at 72dB and above, the Noise prescription yielded significantly lower loudness ratings.

Although the average consonant recognition scores for the Noise prescription were only slightly lower than those for the Quiet prescription, it may not be advisable to use the Noise prescription as the primary program for regular daily use, because of the risk of reduced audibility. This is especially true for pediatric hearing aid patients, for whom maximal audibility is essential for speech and language development. Rather, the Noise prescription is better used as an alternate program, to be accessed manually by the patient, teacher or caregiver, or via automatic classification algorithms within the hearing aid. Though the Noise prescription did not improve speech recognition in noise, it did not result in a decrement in performance and yielded lower loudness ratings, suggesting that in real world situations it would improve comfort in noise while still maintaining adequate speech intelligibility.

Many audiologists find that patients prefer a primary program set to a prescriptive formula (DSL v5, NAL-NL2 or proprietary targets) for daily use but appreciate a separate, manually accessible noise program with reduced low-frequency gain and increased noise reduction. This is true even for the majority of patients who have automatically switching primary programs, with built-in noise modes. Anecdotal remarks from adult patients using manually accessible noise programs agree with the findings of the present study, in that most people use them for comfort in noisy conditions and find that they are still able to enjoy conversation.

For the pediatric patient, prescription of environment-specific memories should be done on a case-by-case basis. Patients functioning at a teenage level may be capable of manually selecting a noise program in appropriate conditions; those functioning at a younger age will require assistance from a supervising adult. Personalized, written instructions can help adult caregivers understand which listening conditions may be uncomfortable and what actions should be taken to adjust the hearing aids. Most modern hearing aids feature some form of automatic environmental classification, with ambient noise level estimation being one of the more robust classifiers. Automatic classification and switching may be sufficient to address concerns of discomfort. However, the details of this behavior vary greatly among hearing aids, so it is essential that the prescribing audiologist is aware of any automatic switching behavior and verifies each of the accessible hearing aid memories.

Crukley and Scollie’s study supports the use of separate programs for everyday use and noisy conditions and indicates that children could benefit from this approach. The DSL Quiet and Noise prescriptive targets offer a consistent and verifiable method for this approach with children, while also providing potential guidelines for designing alternate noise programs for use by adults with hearing aids.



Cheesman, M. & Jamieson, D. (1996). Development, evaluation and scoring of a nonsense word test suitable for use with speakers of Canadian English. Canadian Acoustics 24, 3-11.

Ching, T., Scollie, S., Dillon, H. & Seewald, R. (2010). A crossover, double-blind comparison of the NAL-NL1 and the DSL v4.1 prescriptions for children with mild to moderately severe hearing loss. International Journal of Audiology 49 (Suppl. 1), S4-S15.

Ching, T., Scollie, S., Dillon, H., Seewald, R., Britton, L. & Steinberg, J. (2010). Prescribed real-ear and achieved real life differences in children’s hearing aids adjusted according to the NAL-NL1 and the DSL v4.1 prescriptions. International Journal of Audiology 49 (Suppl. 1), S16-25.

Cox, R., Alexander, G., Taylor, I. & Gray, G. (1997). The contour test of loudness perception. Ear and Hearing 18, 388-400.

Cox, R. & Gray, G. (2001). Verifying loudness perception after hearing aid fitting. American Journal of Audiology 10, 91-98.

Crandell, C. & Smaldino, J. (2000). Classroom acoustics for children with normal hearing and hearing impairment. Language, Speech and Hearing Services in Schools 31, 362-370.

Crukley, J. & Scollie, S. (2012). Children’s speech recognition and loudness perception with the Desired Sensation Level v5 Quiet and Noise prescriptions. American Journal of Audiology 21, 149-162.

Dillon, H. (2001). Prescribing hearing aid performance. Hearing Aids (pp. 234-278). New York, NY: Thieme.

Jenstad, L., Seewald, R., Cornelisse, L. & Shantz, J. (1999). Comparison of linear gain and wide dynamic range compression hearing aid circuits: Aided speech perception measures. Ear and Hearing 20, 117-126.

Niquette, P., Arcaroli, J., Revit, L., Parkinson, A., Staller, S., Skinner, M. & Killion, M. (2003). Development of the BKB-SIN test. Paper presented at the annual meeting of the American Auditory Society, Scottsdale, AZ.

Pittman, A. & Stelmachowicz, P. (2000). Perception of voiceless fricatives by normal hearing and hearing-impaired children and adults. Journal of Speech, Language and Hearing Research 43, 1389-1401.

Scollie, S. (2008). Children’s speech recognition scores: The speech intelligibility index and proficiency factors for age and hearing level. Ear and Hearing 29, 543-556.

Scollie, S., Ching, T., Seewald, R., Dillon, H., Britton, L., Steinberg, J. & Corcoran, J. (2010). Evaluation of the NAL-NL1 and DSL v4.1 prescriptions for children: Preference in real world use. International Journal of Audiology 49 (Suppl. 1), S49-S63.

Scollie, S., Ching, T., Seewald, R., Dillon, H., Britton, L., Steinberg, J. & King, K. (2010). Children’s speech perception and loudness ratings when fitted with hearing aids using the DSL v4.1 and NAL-NL1 prescriptions. International Journal of Audiology 49 (Suppl. 1), S26-S34.

Seewald, R., Ross, M. & Spiro, M. (1985). Selecting amplification characteristics for young hearing-impaired children. Ear and Hearing 6, 48-53.

Stelmachowicz, P., Hoover, B., Lewis, D., Kortekaas, R. & Pittman, A. (2000). The relation between stimulus context, speech audibility and perception for normal hearing and hearing impaired children. Journal of Speech, Language and Hearing Research 43, 902-914.

Stelmachowicz, P., Pittman, A., Hoover, B. & Lewis, D. (2001). Effect of stimulus bandwidth on the perception of /s/ in normal and hearing impaired children and adults. The Journal of the Acoustical Society of America 110, 2183-2190.

Stelmachowicz, P., Pittman, A., Hoover, B. & Lewis, D. (2004). Novel word learning in children with normal hearing and hearing loss. Ear and Hearing 25, 47-56.

Stelmachowicz, P., Pittman, A., Hoover, B., Lewis, D. & Moeller, M. (2004). The importance of high-frequency audibility in the speech and language development of children with hearing loss. Archives of Otolaryngology, Head and Neck Surgery 130, 556-562.

Stelmachowicz, P., Lewis, D., Choi, S. & Hoover, B. (2007).  Effect of stimulus bandwidth on auditory skills in normal hearing and hearing impaired children.  Ear and Hearing 28, 483-494.

On the Prevalence of Cochlear Dead Regions

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Cochlear dead regions are areas in which, due to inner hair cell and/or nerve damage, responses to acoustic stimuli occur not at the area of peak basilar membrane stimulation but at adjacent regions of the cochlea. Professor Brian Moore defined a dead region as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). This hair cell loss does not result in an inability to perceive sound in a given frequency range; rather, the sound is perceived via off-place or off-frequency listening, a spread of excitation to adjacent regions of the cochlea where inner hair cells still function (Moore, 2004). Because the response is spread across a broad tonotopic area, individuals with cochlear dead regions may perceive pure tones as “clicks”, “buzzes” or “whooshes”.

Cochlear dead regions are identified and measured with a variety of masking techniques. The most accurate method is the measurement of psychophysical tuning curves (PTCs), originally developed to assess frequency selectivity (Moore & Alcantara, 2001). A PTC plots the level required to mask a fixed stimulus as a function of masker frequency. For a normally hearing ear, the tip of the PTC, the point at which the lowest masker level is sufficient, aligns with the signal frequency. In ears with dead regions, the tip of the PTC is shifted away from the signal frequency, indicating that the signal is being detected in an adjacent region. Though PTCs are an effective method of identifying and delineating the edges of cochlear dead regions, they are time-consuming and ill-suited to clinical use.

The test used most frequently for clinical identification of cochlear dead regions is the Threshold Equalizing Noise test (TEN; Moore et al., 2000; 2004). The TEN test was developed on the premise that tones detected by off-frequency listening, in ears with dead regions, should be easier to mask with broadband noise than they would be in ears without dead regions. With the TEN (HL) test, masked thresholds are measured across the range of 500Hz to 4000Hz, allowing approximate identification of a cochlear dead region.
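Although this review does not detail how TEN results are scored, the commonly cited decision rule from Moore and colleagues flags a test frequency as falling within a dead region when the masked threshold exceeds both the TEN level and the absolute threshold by at least 10dB. A minimal sketch of that rule, with hypothetical values:

```python
# Sketch of the commonly cited TEN (HL) decision rule (Moore et al., 2000):
# a dead region is suspected at a test frequency when the masked threshold
# is (a) at least 10dB above the TEN level and (b) at least 10dB above the
# absolute (unmasked) threshold. The example values are hypothetical.

CRITERION_DB = 10

def suspect_dead_region(absolute_thr, masked_thr, ten_level):
    """All values in dB at a single test frequency."""
    return (masked_thr >= ten_level + CRITERION_DB and
            masked_thr >= absolute_thr + CRITERION_DB)

# Example: 70dB HL threshold, TEN at 70dB, masked threshold of 85dB
print(suspect_dead_region(absolute_thr=70, masked_thr=85, ten_level=70))  # True
```

In the clinic this criterion is applied frequency by frequency, which is how a study such as this one can report dead regions spanning one versus several consecutive test frequencies.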

There are currently no standards for clinical management of cochlear dead regions. Some reports suggest that dead regions affect speech, pitch and loudness perception, and general sound quality (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004; Huss et al., 2005a; 2005b). Some researchers have specified amplification characteristics for patients with diagnosed dead regions, but there is no consensus, and different studies have arrived at conflicting recommendations. While some recommend limiting amplification to a range up to 1.7 times the edge frequency of the dead region (Vickers et al., 2001; Baer et al., 2002), others advise the use of prescribed settings and recommend against limiting high-frequency amplification (Cox et al., 2012). Because of these conflicting recommendations, it remains unclear how clinicians should modify their treatment plans, if at all, for hearing aid patients with dead regions.
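The 1.7-times-edge-frequency recommendation is simple arithmetic; the sketch below merely illustrates it and should not be read as an endorsement of gain limiting, which, as noted above, other studies advise against.

```python
# Illustration of the Vickers/Baer recommendation mentioned above: limit
# amplification to roughly 1.7 times the edge frequency of a high-frequency
# dead region. Other studies (e.g., Cox et al., 2012) recommend against
# such limiting; this only shows the arithmetic.

EDGE_MULTIPLIER = 1.7

def amplification_limit_hz(edge_frequency_hz):
    """Upper limit of amplified bandwidth for a given dead-region edge."""
    return EDGE_MULTIPLIER * edge_frequency_hz

# A dead region with an edge at 1500 Hz would cap amplification near 2550 Hz
print(amplification_limit_hz(1500))  # 2550.0
```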

Previous research on the prevalence of dead regions has reported widely varying results, possibly due to differences in test methodology or subject characteristics. In a study of hearing aid candidates, Cox et al. (2011) reported a dead region prevalence of 31%, but their strict inclusion criteria likely excluded individuals with milder hearing losses, so their estimate may not generalize to hearing aid candidates at large. Vinay and Moore (2007) reported a higher prevalence of 57% in a study that included individuals with thresholds as low as 15dB HL at some frequencies; however, the median hearing loss of their subjects was greater than in the Cox et al. study, which likely contributed to the higher prevalence estimate.

In the study being reviewed, Pepler and her colleagues aimed to determine how prevalent cochlear dead regions are among a population of individuals who have or are being assessed for hearing aids. Because dead regions become more likely as hearing loss increases, and established hearing aid patients are more likely to have greater degrees of hearing loss, they also investigated whether established hearing aid patients would be more likely to have dead regions than newly referred individuals.  Finally, they studied whether age, gender, hearing thresholds or slope of hearing loss could predict the presence of cochlear dead regions.

The researchers gathered data from a group of 376 patients selected from the database of a hospital audiology clinic in Manchester, UK. Of the original group, 343 individuals met inclusion criteria; 193 were new referrals and 150 were established patients and experienced hearing aid users.  Of the new referrals, 161 individuals were offered and accepted hearing aids, 16 were offered and declined hearing aids and 16 were not offered hearing aids because their losses were of mild degree.  The 161 individuals who were fitted with new hearing aids were referred to as “new” hearing aid users for the purposes of the study. All subjects had normal middle ear function and otoscopic examinations and on average had moderate sensorineural hearing losses.

When reported as a proportion of the total subjects in the study, Pepler and her colleagues found a dead region prevalence of 36%. When reported as a proportion of ears, the prevalence was 26%, indicating that some subjects had dead regions in one ear only. Follow-up analysis of 64 patients with unilateral dead regions revealed that the ears with dead regions had significantly poorer audiometric thresholds than the ears without. Only 3% of study participants had dead regions extending across three or more consecutive test frequencies, and ears with such contiguous dead regions had greater hearing loss than those without. Among new hearing aid users, 33% had dead regions, compared with 43% of experienced hearing aid users; the experienced users had, on average, poorer audiometric thresholds than the new users.

Pepler and colleagues excluded hearing losses above 85dB HL because effective TEN masking could not be achieved at those levels. Dead regions were therefore most common in hearing losses from 50 to 85dB HL, though a few were measured below that range; none were measurable at hearing thresholds below 40dB HL. Ears with steeper audiometric slopes were more likely to have dead regions, but further analysis revealed that only the 4 kHz threshold made a significant predictive contribution: the slope of high-frequency hearing loss predicted dead regions only because of the increased degree of hearing loss at 4 kHz.

Demographically, more men than women had dead regions in at least one ear, but their audiometric configurations differed: women had poorer low-frequency thresholds whereas men had poorer high-frequency thresholds. The apparent gender effect was therefore likely due to the difference in audiometric configuration, specifically the men’s poorer high-frequency thresholds. A similar result was reported for the analysis of age effects: older subjects had a higher prevalence of dead regions but also had significantly poorer hearing thresholds. Though poorer hearing thresholds at 4 kHz did slightly increase the likelihood of dead regions, regression analysis found that age, gender and hearing thresholds were not, in themselves, significant predictors.

Pepler et al.’s prevalence data agree with the 31% reported by Cox et al. (2012), but are lower than the prevalence reported by Vinay and Moore (2007), possibly because the subjects in the latter study had greater average hearing loss. When Pepler and her colleagues applied inclusion criteria similar to those of the Cox study, however, they found a prevalence of 59%, much higher than Cox and colleagues reported, likely due to the exclusion of subjects with normal low-frequency hearing in the Cox study. The authors proposed that this exclusion could have reduced the overall prevalence by increasing the proportion of subjects with metabolic presbyacusis and eliminating some subjects with sensory presbyacusis, which is often associated with steeply sloping hearing loss and involves atrophy of cochlear structures (Schuknecht, 1964).

In summary:

The study reported here shows that roughly a third of established and newly referred hearing aid patients are likely to have at least one cochlear dead region in at least one ear. A very low proportion (3% in this report) are likely to have dead regions spanning three or more consecutive test frequencies. The only factor that predicted the presence of dead regions was the hearing threshold at 4 kHz.

On the lack of clinical guidance:

As more information is gained about prevalence and risk factors, what remains missing are clinical guidelines for managing hearing aid users with diagnosed high-frequency dead regions. Conflicting recommendations have been proposed: either limit high-frequency amplification, or preserve it and work within prescribed targets. The data available today suggest that the prevalence of contiguous multi-octave dead regions is very low, and that only a further subset of users with contiguous dead regions experience any negative effects of high-frequency amplification. In light of these observations, it seems prudent to fit high-frequency gain to prescribed targets for all patients at the initial fitting. Any reduction in high-frequency gain should be guided by the patient’s subjective feedback after a trial period with the hearing aids.

On frequency lowering and dead regions:

Some clarity is required regarding the role of frequency lowering in the treatment of cochlear dead regions. Because acoustic information in speech extends out to 10 kHz, and because most hearing aid frequency responses roll off significantly above 4-5 kHz, a mild prescription of frequency lowering can benefit many hearing aid users. It must be noted that the benefits of this technology arise largely from the acoustic limitations of the device, not from the presence or absence of a cochlear dead region. There are presently no recommendations for selecting frequency-lowering parameters in cases of cochlear dead regions. In their absence, best practice is to follow the same guidelines as for any other patient with hearing loss: verification and validation should be performed to document benefit from the algorithm and to confirm appropriate selection of algorithm parameters.

On the low-frequency dead region: 

The effects of low-frequency dead regions are not well studied and may have a more significant impact on hearing aid performance. Hornsby (2011) noted potential negative effects of low-frequency amplification if it extends into the range of a low-frequency dead region (Vinay & Moore, 2007; Vinay et al., 2008). In some cases performance decrements reached 30%, so the authors recommended limiting low-frequency amplification to frequencies no lower than 0.57 times the edge frequency of the dead region in order to preserve speech recognition ability. Though dead regions are less common in the low frequencies than in the high frequencies, more study is needed to determine clinical testing and treatment implications.
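The 0.57-times-edge guideline above is simple to apply. As a minimal sketch (in Python, with an invented function name; not a clinical tool), the lower amplification limit for a low-frequency dead region could be computed as:

```python
# Illustrative sketch of the 0.57 * edge-frequency guideline described
# above; the function name is invented for this example.

def low_frequency_amplification_limit(edge_hz: float) -> float:
    """Lowest frequency (Hz) to amplify for a low-frequency dead region
    whose upper edge is at edge_hz, per the 0.57 * edge guideline."""
    return 0.57 * edge_hz

# Example: a dead region with an edge at 500 Hz suggests limiting
# amplification to frequencies above roughly 285 Hz.
print(round(low_frequency_amplification_limit(500), 1))  # 285.0
```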


Baer, T., Moore, B. C. and Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 Pt 1), 1133-44.

Cox, R., Alexander, G., Johnson, J., Rivera, I. (2011). Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32(3), 339 – 348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012).  Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing 33(5), 573-87.

Hornsby, B. (2011) Dead regions and hearing aid fitting. Ask the Experts, Audiology Online October 3, 2011.

Huss, M. & Moore, B. (2005a). Dead regions and pitch perception. Journal of the Acoustical Society of America 117, 3841-3852.

Huss, M. & Moore, B. (2005b). Dead regions and noisiness of pure tones. International Journal of Audiology 44, 599-611.

Mackersie, C. L., Crocker, T. L. and Davis, R. A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15(7), 498-507.

Moore, B., Huss, M. & Vickers, D. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B. (2004). Dead regions in the cochlea: Conceptual foundations, diagnosis and clinical applications. Ear and Hearing 25, 98-116.

Moore, B. & Alcantara, J. (2001). The use of psychophysical tuning curves to explore dead regions in the cochlea. Ear and Hearing 22, 268-278.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

Schuknecht, H.F. (1964). Further observations on the pathology of presbycusis. Archives of Otolaryngology 80, 369-382.

Vickers, D., Moore, B. & Baer, T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay and Moore, B. C. (2007). Speech recognition as a function of high-pass filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 122(1), 542-53.

Vinay, Baer, T. and Moore, B. C. (2008). Speech recognition in noise as a function of high pass-filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 123(2), 606-9.

Evidence for the Value of Real-Ear Measurement

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

Audiology best practice guidelines state that probe microphone verification measures should be conducted to ensure that hearing aid gain and output characteristics meet prescribed targets for the individual. In the American Academy of Audiology’s Guidelines for the Audiologic Management of Adult Hearing Impairment, an expert task force recommends that “prescribed gain from a validated prescriptive method should be verified using a probe microphone approach that is referenced to ear canal SPL” (Valente et al., 2006). Similarly, the Academy’s Pediatric Amplification Protocol (AAA, 2003) states that hearing aid output characteristics should be verified with real-ear measures, or with real-ear-to-coupler difference (RECD) calculations when lengthy adjustment sessions involving real-ear measurement are not possible.

In contrast to these recommendations, the majority of hearing aid providers do not routinely conduct real-ear verification measures. In a survey of audiologists and hearing instrument specialists, Mueller and Picou (2010) found that respondents used real-ear verification only about 40% of the time, and Bamford et al. (2001) reported that only about 20% of individuals fitting pediatric patients used real-ear measures. The reasons most often cited for skipping probe microphone measures are financial, time, or space constraints.

When probe microphone measures are not conducted, other verification techniques may be used, such as aided word recognition, but these are not likely to provide reliable information (Thornton & Raffin, 1978). Alternatively, verification may not be attempted at all, with fitting parameters chosen based on the manufacturer’s initial-fit specifications. Although most fitting software allows entry of age, experience, and acoustic information such as canal length and venting characteristics, its predictions are based on average data and cannot account for individual ear canal effects.

Numerous studies have shown that initial-fit algorithms often deviate significantly from prescribed targets, usually underestimating required gain, especially in the high frequencies. Hawkins & Cook (2003) found that simulated fittings from one manufacturer’s initial-fit algorithm overestimated the coupler gain and in-situ response by as much as 20 dB, especially in the low and high frequencies. Bentler (2004) compared the 2cc coupler responses of six different hearing aids programmed with initial-fit algorithms and found that the responses differed across manufacturers and deviated from prescriptive targets by as much as 15 dB, usually falling below prescribed targets. Similarly, Bretz (2006) studied three manufacturers’ pediatric first-fit algorithms and found that average output varied by about 20 dB and that initial-fit gain values fell below both NAL-NL1 and DSL (i/o) targets. This is of particular concern because pediatric patients may be less able than adults to provide subjective responses to hearing aid settings, rendering objective measures such as real-ear verification even more important.
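At its core, the verification step these studies describe amounts to comparing measured real-ear output against prescriptive targets, frequency by frequency. A hypothetical sketch of that comparison, with target and measured values invented purely for illustration:

```python
# Hypothetical illustration of target-vs-measured comparison in probe
# microphone verification. All dB values below are invented examples,
# not data from any study or prescriptive method.

def target_deviations(targets_db, measured_db):
    """Return per-frequency deviation (measured minus target) in dB."""
    return {f: round(measured_db[f] - t, 1) for f, t in targets_db.items()}

targets  = {500: 55.0, 1000: 60.0, 2000: 65.0, 4000: 70.0}  # dB SPL targets
measured = {500: 54.0, 1000: 57.0, 2000: 58.0, 4000: 55.0}  # measured output

devs = target_deviations(targets, measured)
# Flag frequencies where a first fit falls well below target (e.g., > 5 dB),
# a pattern the studies above report most often in the high frequencies.
under_target = [f for f, d in devs.items() if d < -5]
print(devs)
print(under_target)  # [2000, 4000]
```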

These studies and others illuminate the potential difference between first-fit hearing aid settings and those verified by objective measures, but it is not well known how this affects the user’s perceived benefit.  Some early reports using linear amplification targets indicated that verification did not predict perceived benefit (Nerbonne et al., 1995; Weinstein et al., 1995), but more recent work indicates that adults fit to DSL v5.0a targets demonstrated benefit as measured by the Client Oriented Scale of Improvement (COSI, Dillon & Ginis, 1997). A recent survey by Kochkin et al. (2010) found that patients whose fittings were verified with a comprehensive protocol including real-ear verification reported increased hearing aid usage, benefit and satisfaction. Furthermore, these respondents were more likely to recommend their hearing care professional to friends and family than were the respondents who were not fitted with real-ear verification.

The purpose of the study discussed here was to determine whether perceived hearing aid benefit differed depending on whether the user was fitted with an initial-fit algorithm only or with settings adjusted on the basis of probe microphone verification. Twenty-two experienced hearing aid users with mild to moderately severe hearing loss participated in the study. All were fitted with binaural hearing aids, though a variety of hearing aid styles and manufacturers were represented. Probe microphone measurements were conducted on all subjects, but those in the initial-fit group did not receive adjustments based on the verification measures.

Perceived hearing aid benefit was measured using the Abbreviated Profile of Hearing Aid Benefit (APHAB, Cox & Alexander, 1995). The APHAB consists of 24 items in four subscales: ease of communication (EC), reverberation (RV), background noise (BN) and aversiveness of sounds (AV).  In addition to subscale scores, an average global score can be calculated, as well as a benefit score which represents the difference between unaided and aided responses.
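As a simplified sketch of how such a benefit score is derived (the subscale values below are invented, and the real APHAB is scored from the 24 item responses rather than entered directly as subscale percentages):

```python
# Simplified, illustrative APHAB-style scoring. Subscale values are
# invented problem percentages (higher = more difficulty); the actual
# APHAB derives these from 24 individual item responses.

unaided = {"EC": 60.0, "RV": 55.0, "BN": 70.0, "AV": 20.0}
aided   = {"EC": 25.0, "RV": 30.0, "BN": 45.0, "AV": 35.0}

# Benefit = unaided problems minus aided problems (positive = improvement).
benefit = {scale: unaided[scale] - aided[scale] for scale in unaided}

# A global score is commonly averaged over EC, RV, and BN; AV is treated
# separately because amplification can make aversive sounds worse.
global_benefit = sum(benefit[s] for s in ("EC", "RV", "BN")) / 3

print(benefit)
print(round(global_benefit, 1))  # 28.3
```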

Prior to being fitted with their hearing aids, participants completed the APHAB questionnaire. Because all were experienced hearing aid users, they were asked to base their answers on their experiences without amplification.  Hearing aid fittings and probe microphone verification were then conducted on all subjects, but half of the subjects received adjustments to match prescribed targets and half of the subjects maintained their first-fit settings. Efforts were made to ensure that subjects were not aware of the difference between the initial-fit and verified fitting methods. The only adjustments that subjects in the initial-fit group received were based on issues that could affect their willingness to wear the hearing aids, such as loudness discomfort or feedback.

One month following the first appointment, subjects returned to the clinic and were administered the APHAB again. They were given their initial “unaided” APHAB responses to use as a comparison. After completing the APHAB, the subjects who had been fitted with the initial-fit algorithms were switched to verified fittings, and those who had been fitted to prescribed targets were switched to the manufacturer’s initial-fit settings. All subjects were re-tested with probe microphone measures, and those with loudness or feedback complaints received minor adjustments.

One month after the second appointment, subjects returned to complete the APHAB and were again allowed to use their original APHAB responses as a basis for comparison. They were not allowed to view their responses to the APHAB that was administered after the first hearing aid trial. Participants were also asked to indicate which fitting method (Session 1 or Session 2) they preferred and would want permanently programmed into their hearing aids.

Analysis of the probe microphone measurements indicated, not surprisingly, that the verified fittings were more closely matched to prescriptive targets than the fittings based on the first-fit algorithms, even after minor adjustments based on comfort and user preferences.  For three of the APHAB subscales – ease of communication, reverberation and background noise – scores obtained with verified fittings were superior to those obtained with the initial-fit approach and the main effect of fitting approach was found to be statistically significant. There was no interaction between fitting approach and APHAB subscale, indicating that the better outcomes obtained with verified fittings were not related to any specific listening environment.

When asked to indicate their preferred fitting method, 7 of the 22 participants selected the initial-fit approach, whereas more than twice as many subjects, 15 out of 22, selected the verified fitting. For all but 5 subjects, the global difference score on the APHAB predicted their preferred fitting method, and the relationship between global score and final preference was statistically significant.

The findings of this study and of related reports raise philosophical and practical considerations for audiologists. One of our primary goals is to provide effective rehabilitation for hearing-impaired patients, most often accomplished by fitting and dispensing quality hearing instruments. Clinical and research data repeatedly indicate the importance of probe microphone verification. It serves the best interest of our patients to offer them the most effective fitting approach, so it follows that probe microphone verification should be a routine, essential part of our clinical protocol.

Reports that only a minority of hearing aid fittings are verified with real-ear measures indicate that many clinicians are not following recommended best practices. Indeed, Palmer (2006) points out that failure to follow best practice guidelines is a departure from the ethical standards of professional competence. Failure to provide the recommended objective verification for hearing aid fittings runs counter to our clinical goals and, as Palmer suggests, may even be damaging to our “collective reputation” as a profession.

Philosophical arguments notwithstanding, there are also practical reasons to incorporate real-ear measures into the fitting protocol. In the MarkeTrak VIII survey, Kochkin and colleagues reported that hearing aid users who received probe microphone verification as part of a detailed fitting protocol were more satisfied with their hearing instruments and more likely to recommend their clinician to friends. In the current field of hearing aid service provision, it is important for audiologists to consider how they can meaningfully distinguish themselves from online, mail-order, and big-box retail competitors. Hearing aid users are becoming well-informed consumers, and establishing a base of satisfied patients who feel they have received comprehensive, competent care is crucial for growing a private practice. Probe microphone verification is a brief yet effective step in ensuring successful hearing aid fittings, and it benefits our patients and our profession to provide this essential service.


Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

American Academy of Audiology (2003). Pediatric Amplification Protocol (accessed 3-3-13).

Bamford, J., Beresford, D. & Mencher, G. (2001). Provision and fitting of new technology hearing aids: implications from a survey of some “good practice services” in UK and USA. In: Seewald, R.C., Gravel, J.S., eds. A Sound Foundation Through Early Amplification: Proceedings of an International Conference. Stafa, Switzerland: Phonak AG, 213–219.

Bentler, R. (2004). Advanced hearing aid features: Do they work? Paper presented at the convention of the American Speech-Language-Hearing Association, Washington, D.C.

Bretz, K. (2006). A comparison of three hearing aid manufacturers’ recommended first fit to two generic prescriptive targets with the pediatric population. Independent Studies and Capstones, Paper 189. Program in Audiology and Communication Sciences, Washington University School of Medicine.

Cox, R. & Alexander, G. (1995). The abbreviated profile of hearing aid benefit. Ear and Hearing 16, 176-183.

Dillon, H. & Ginis, J. (1997). Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. Journal of the American Academy of Audiology 8: 27-43.

Hawkins, D. & Cook, J. (2003). Hearing aid software predictive gain values: How accurate are they? Hearing Journal 56, 26-34.

Kochkin, S., Beck, D., & Christensen, L. (2010). MarkeTrak VIII: The impact of the hearing health care professional on hearing aid user success. Hearing Review 17, 12-34.

Mueller, H., & Picou, E. (2010). Survey examines popularity of real-ear probe-microphone measures. Hearing Journal 63, 27-32.

Nerbonne, M., Christman, W. & Fleschner, C. (1995). Comparing objective and subjective measures of hearing aid benefit. Poster presentation at the annual convention of the American Academy of Audiology, Dallas, TX.

Palmer, C.V. (2006). Best practice: it’s a matter of ethics. Audiology Today, Sept-Oct.,31-35.

Thornton, A. & Raffin, M. (1978) Speech-discrimination scores modeled as a binomial variable. Journal of Speech and Hearing Research 21, 507–518.

Valente, M., Abrams, H., Benson, D., Chisolm, T., Citron, D., Hampton, D., Loavenbruck, A., Ricketts, T., Solodar, H. &  Sweetow, R. (2006). Guidelines for the Audiological Management of Adult Hearing Impairment. Audiology Today, Vol 18.

Weinstein, B., Newman, C. & Montano, J. (1995). A multidimensional analysis of hearing aid benefit. Paper presented at the 1st Biennial Hearing Aid Research & Development Conference, Bethesda, MD.

Can Aided Audibility Predict Pediatric Lexical Development?

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Despite advances in early hearing loss identification, hearing aid technology, and fitting and verification tools, children with hearing loss consistently demonstrate limited lexical abilities compared to children with normal hearing.  These limitations have been illustrated by poorer performance on tests of vocabulary (Davis et al., 1986), word learning (Gilbertson & Kamhi, 1995; Stelmachowicz et al., 2004), phonological discrimination, and non-word repetition (Briscoe et al., 2001; Delage & Tuller, 2007; Norbury, et al., 2001).

There are a number of variables that may predict hearing-impaired children’s performance on speech and language tasks, including the age at which they were first fitted with hearing aids and the degree of hearing loss.  Moeller (2000) found that children who received earlier aural rehabilitation intervention demonstrated significantly larger receptive vocabularies than those who received later intervention.  Degree of hearing loss, typically defined in studies by the pure-tone average (PTA), the average of pure-tone hearing thresholds at 500 Hz, 1000 Hz, and 2000 Hz (Fletcher, 1929), has been significantly correlated with speech recognition (Davis et al., 1986; Gilbertson & Kamhi, 1995), receptive vocabulary (Fitzpatrick et al., 2007; Wake et al., 2005), expressive grammar, and word recognition (Delage & Tuller, 2007) in some studies comparing hearing-impaired children to those with normal hearing.
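The conventional PTA described above is a simple average of thresholds. A minimal sketch, with an invented audiogram and one common (but study-dependent) choice of frequencies for a high-frequency average:

```python
# Illustrative computation of the pure-tone average (PTA) defined above.
# The audiogram values are invented, and the 1/2/4 kHz choice for the
# high-frequency average (HFPTA) is one common option that varies by study.

def pta(thresholds_db, freqs=(500, 1000, 2000)):
    """Average pure-tone threshold (dB HL) across the given frequencies."""
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

audiogram = {500: 30, 1000: 40, 2000: 55, 4000: 70}  # dB HL (invented)

print(round(pta(audiogram), 1))                # 41.7 (conventional PTA)
print(pta(audiogram, (1000, 2000, 4000)))      # 55.0 (high-frequency average)
```

Note how the sloping audiogram above yields a much poorer high-frequency average than conventional PTA, which is why the two measures can lead to different conclusions about degree of loss.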

In contrast, other studies have reported that pure-tone average (PTA) did not predict language ability in hearing-impaired children.  Davis et al. (1986) tested hearing-impaired subjects between five and 18 years of age and found no significant relationship between PTA and vocabulary, verbal ability, reasoning, and reading.  However, all subjects scored below average on these measures, regardless of their degree of hearing loss.  Similarly, Moeller (2000) found that age of intervention affected vocabulary and verbal reasoning, but PTA did not.  Gilbertson and Kamhi (1995) studied novel word learning in hearing-impaired children ranging in age from seven to 10 years and found that neither PTA nor unaided speech recognition threshold was correlated with receptive vocabulary level or word learning.

At a glance, it seems likely that degree of hearing loss should affect language development and ability, because hearing loss affects audibility, and speech must be audible in order to be processed and learned.  However, the typical PTA of thresholds at 500Hz, 1000Hz, and 2000Hz does not take into account high-frequency speech information beyond 2000Hz.  Some studies using averages of high-frequency pure-tone thresholds (HFPTA) have found a significant relationship between degree of loss and speech recognition (Amos & Humes, 2007; Glista et al., 2009).

Because most hearing-impaired children now benefit from early identification and intervention, their pure-tone hearing threshold averages (PTA or HFPTA) might not be the best predictors of speech and language abilities in everyday situations.  Rather, a measure that combines degree of hearing loss with hearing aid characteristics might be a better predictor of speech and language ability in hearing-impaired children.  The Speech Intelligibility Index (SII; ANSI, 2007), a measure of audibility that weights different frequency regions by their importance for the phonemic content of a given speech test, has proven to be predictive of performance on speech perception tasks for adults and children (Dubno et al., 1989; Pavlovic et al., 1986; Stelmachowicz et al., 2000).  Hearing aid gain characteristics can be incorporated into the SII algorithm to yield an aided SII, which has been reported to predict performance on word repetition (Magnusson et al., 2001) and nonsense syllable repetition in adults (Souza & Turner, 1999).  Because the aided SII incorporates both the individual’s hearing loss and hearing aid characteristics, it better represents how audibility affects an individual’s daily functioning.
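The idea behind the SII can be illustrated with a deliberately simplified calculation: audibility in each frequency band, weighted by that band’s importance for speech, summed to an index between 0 and 1. The band importances and sensation levels below are invented, and the real ANSI S3.5 procedure is considerably more detailed:

```python
# Deliberately simplified illustration of the principle behind the SII.
# Band importances and sensation levels are invented; the real ANSI S3.5
# calculation uses standardized band-importance functions and corrections.

def simple_audibility_index(band_importance, sensation_level_db):
    """Weight each band's audibility (0-1, full audibility at 30 dB SL,
    reflecting the ~30 dB dynamic range of speech) by its importance."""
    index = 0.0
    for band, importance in band_importance.items():
        audibility = min(max(sensation_level_db[band], 0), 30) / 30
        index += importance * audibility
    return index

importance = {500: 0.2, 1000: 0.3, 2000: 0.3, 4000: 0.2}  # sums to 1.0
aided_sl   = {500: 30, 1000: 24, 2000: 15, 4000: 0}       # aided dB SL

print(round(simple_audibility_index(importance, aided_sl), 2))  # 0.59
```

The sketch shows why two children with the same PTA can have different aided SIIs: the index depends on how much of the speech range the hearing aid actually makes audible in each important band, not on unaided thresholds alone.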

The purpose of the current study was to evaluate the aided SII as a predictor of performance on measures of word recognition, phonological working memory, receptive vocabulary, and word learning.  Because development in these areas establishes a base for later achievements in language learning and reading (Tomasello, 2000; Stanovich, 1986), it is important to determine how audibility affects lexical development in hearing-impaired children.  Though the SII is usually calculated based on the particular speech test to be studied, the current investigation used aided SII values based on average speech spectra.  The authors explained that vocabulary acquisition is a cumulative process, and they intended to use the aided SII as a measure of cumulative, rather than test-specific, audibility.

Sixteen hearing-impaired children with hearing aids (CHA) and 24 children with normal hearing (CNH) between six and nine years of age participated in the study.  All of the hearing-impaired children had bilateral hearing loss and had used amplification for at least one year.  All participants used spoken language as their primary form of communication.  Real-ear measurements were used to calculate the aided SII at user settings.  Because the goal was to evaluate the children’s actual audibility as opposed to optimal audibility, their current user settings were used in the experiment whether or not they met DSL prescriptive targets (Scollie et al., 2005).

Subjects participated in tasks designed to assess four lexical domains.

Word recognition was measured by the Lexical Neighborhood Test and Multisyllabic Lexical Neighborhood Test (LNT and MLNT; Kirk & Pisoni, 2000).  These tests each contain “easy” and “hard” lists, based on how frequently the words occur in English and how many lexical neighbors they have.  Children with normal lexical development are expected to show a gradient in performance, with the best scores on the easy MLNT and the poorest scores on the hard LNT.

Non-word repetition was measured by a task prepared specifically for this study, using non-words selected based on adult ratings of “wordlikeness”.  In the word recognition and non-word repetition tasks, children were simply asked to repeat the words that they heard.  Responses in both tasks were scored according to the number of phonemes correct; additionally, the LNT and MLNT tests were scored based on the number of words correct.

Receptive vocabulary was measured by the Peabody Picture Vocabulary Test (PPVT-III; Dunn & Dunn, 1997), in which the children were asked to view four images and select the one that corresponds to the presented word.  Raw scores are the number of items correctly identified, with norms applied based on the subject’s age.

Novel word learning was assessed using the same stimuli as the non-word repetition task, after the children were given sentence context and visual imagery to teach them the “meaning” of the novel words.  Their ability to learn the novel words was evaluated in two ways: a production task, in which they were asked to say the word when prompted by a corresponding picture, and an identification task, in which they were presented with an array of four items and asked to select the one that corresponded to the presented word.

On the word recognition tests, the children with hearing aids (CHA) demonstrated poorer performance than the children with normal hearing (CNH) for measures of word and phoneme accuracy, though both groups demonstrated the expected gradient, with performance improving in parallel fashion from the hard LNT test through the easy MLNT test.  There was a correlation between aided SII and word recognition scores, but PTA and aided SII were equally good at predicting performance.

On the non-word repetition task, which requires auditory perception, phonological analysis, and phonological storage (Gathercole, 2006), CHA again demonstrated significantly poorer performance than CNH, and CNH performance was near ceiling levels.  PTA and aided SII scores were correlated with non-word repetition scores.  Beyond the effect of PTA, it was determined that aided SII accounted for 20% of the variance on the non-word repetition task, which was statistically significant.

The receptive vocabulary test yielded similar results; CHA performed significantly worse than CNH and both PTA and aided SII accounted for a significant proportion of the variance.

The only variable that predicted performance on the word learning tasks was age, which only yielded a significant effect on the word production task.  On the word identification task, both the CHA and CNH groups scored only slightly better than chance and there were no significant effects of group or age.

As was expected in this study, children with hearing aids (CHA) consistently showed poorer performance than children with normal hearing (CNH), with the exception of the novel word learning task.  The pattern of results suggests that aided audibility, as measured by the aided SII, was better at predicting performance than degree of hearing loss as measured by PTA.  Greater aided SII scores were consistently associated with more accurate word recognition, more accurate non-word repetition, and larger receptive vocabulary.

Although PTA or HFPTA may represent the degree of unaided hearing loss, because the aided SII score accounts for the contribution of the individual’s hearing aids, it is likely a better representation of speech audibility and auditory perception in everyday situations.  The authors point out that depending on the audiometric configuration and hearing aid characteristics, two individuals with the same PTA could have different aided SIIs, and therefore different auditory experiences.

The results of this study underscore the importance of audibility for lexical development, which in turn has significant implications for further development of language, reading, and academic skills.  Therefore, the early provision of audibility via appropriate and verifiable amplification appears to be an important step in the development of speech and language.  The SII, which is already incorporated into some real-ear systems or is available in a standalone software package, is a verification tool that should be considered a standard part of the fitting protocol for pediatric hearing aid patients.



American National Standards Institute (2007). Methods for calculation of the Speech Intelligibility index (ANSI S3.5-1997[R2007]). New York, NY: Author.

Amos, N.E. & Humes, L.E. (2007). Contribution of high frequencies to speech recognition in quiet and noise in listeners with varying degrees of high-frequency sensorineural hearing loss. Journal of Speech, Language and Hearing Research 50, 819-834.

Briscoe, J., Bishop, D.V. & Norbury, C.F. (2001). Phonological processing, language and literacy: a comparison of children with mild-to-moderate sensorineural hearing loss and those with specific language impairment. Journal of Child Psychology and Psychiatry 42, 329-340.

Davis, J.M., Elfenbein, J., Schum, R. & Bentler, R.A. (1986). Effects of mild and moderate hearing impairments on language, educational and psychosocial behavior of children. Journal of Speech and Hearing Disorders 51, 53-62.

Delage, H. & Tuller, L. (2007). Language development and mild-to-moderate hearing loss: Does language normalize with age? Journal of Speech, Language and Hearing Research 50, 1300-1313.

Dubno, J.R., Dirks, D.D. & Schaefer, A.B. (1989). Stop-consonant recognition for normal hearing listeners and listeners with high-frequency hearing loss. II: Articulation index predictions. The Journal of the Acoustical Society of America 85, 355-364.

Dunn, L.M. & Dunn, D.M. (1997). Peabody Picture Vocabulary Test – III. Circle Pines, MN: AGS.

Fitzpatrick, E., Durieux-Smith, A., Eriks-Brophy, A., Olds, J. & Gaines, R. (2007). The impact of newborn hearing screening on communications development. Journal of Medical Screening 14, 123-131.

Fletcher, H. (1929). Speech and hearing in communication. Princeton, NJ: Van Nostrand Reinhold.

Gilbertson, M. & Kamhi, A.G. (1995). Novel word learning in children with hearing impairment. Journal of Speech and Hearing Research 38, 630-642.

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V. & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology 48, 632-644.

Kirk, K.I. & Pisoni, D.B. (2000). Lexical Neighborhood Tests. St. Louis, MO: AudiTEC.

Magnusson, L., Karlsson, M. & Leijon, A. (2001). Predicted and measured speech recognition performance in noise with linear amplification. Ear and Hearing 22, 46-57.

Moeller, M.P. (2000). Early intervention and language development in children who are deaf and hard of hearing. Pediatrics 106, e43.

Norbury, C.F., Bishop, D.V. & Briscoe, J. (2001). Production of English finite verb morphology: A comparison of SLI and mild-moderate hearing impairment. Journal of Speech, Language and Hearing Research 44, 165-178.

Pavlovic, C.V., Studebaker, G.A. & Sherbecoe, R.L. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. The Journal of the Acoustical Society of America 80, 50-57.

Scollie, S.D., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagary, D. & Pumford, J. (2005). The desired sensation level multistage input/output algorithm. Trends in Amplification 9(4), 159-197.

Souza, P.E. & Turner, C.W. (1999). Quantifying the contribution of audibility to recognition of compression-amplified speech. Ear and Hearing 20, 12-20.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly 21, 360-407.

Stelmachowicz, P.G., Hoover, B.M., Lewis, D.E., Kortekaas, R.W. & Pittman, A.L. (2000). The relation between stimulus context, speech audibility and perception for normal hearing and hearing-impaired children. Journal of Speech, Language and Hearing Research 43, 902-914.

Stelmachowicz, P.G., Pittman, A.L., Hoover, B.M. & Lewis, D.E. (2004 ). Novel word learning in children with normal hearing and hearing loss. Ear and Hearing 25, 47-56.

Tomasello, M. (2000). The item-based nature of children’s early syntactic development. Trends in Cognitive Sciences 4, 156-163.

Wake, M., Poulakis, Z., Hughes, E.K., Carey-Sargeant, C. & Rickards, F.W. (2005). Hearing impairment: A population study of age at diagnosis, severity and language outcomes at 7-8 years. Archives of Disease in Childhood 90, 238-244.


The Tinnitus Functional Index (TFI): A New and Improved Way to Evaluate Tinnitus

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The practice of clinical audiology can arguably be described as having two primary goals: the diagnosis of auditory and vestibular disorders, followed by verifiable, effective treatment and rehabilitation. There are well established, objective diagnostic tests for hearing and vestibular disorders and their treatment methods can be verified with objective and subjective tools. The evaluation and treatment of tinnitus, though equally important, is more complicated. There are test protocols for matching perceived tinnitus characteristics, but the impact of tinnitus on the individual must be measured subjectively with self-assessment questionnaires.  There are several published questionnaires to evaluate tinnitus severity and the impact it has on an individual’s activities, emotions and relationships. However, most of these questionnaires were not designed specifically to measure the effect of tinnitus treatments (Kamalski et al., 2010), so their value as follow-up measures is unknown.

Tinnitus affects as many as 50 million Americans and can have disabling effects, including sleep interference, difficulty concentrating and attending, anxiety, frustration and depression (for review see Tyler & Baker, 1983; Stouffer & Tyler, 1990; Axelsson, 1992; Meikle, 1992; Dobie, 2004b). There are numerous methods of treatment available, including hearing aids, tinnitus maskers, tinnitus retraining therapy, biofeedback, counseling and others. Because there is currently no standard assessment tool for tinnitus treatment outcomes, the effectiveness of tinnitus treatment methods is difficult to verify and compare. The Tinnitus Functional Index (TFI) was developed as a collaborative effort among researchers and clinicians to produce a validated, standard questionnaire that can be used clinically for intake assessments and follow-up measurements, and in the laboratory as a way of comparing treatment efficacy and identifying subject characteristics.

The developers of the TFI aimed for this inventory to be used in three ways:

1. As an intake evaluation tool to identify individual differences in tinnitus patients.
2. As a reliable and valid measurement of multiple domains of tinnitus severity.
3. As an outcome measure to assess treatment-related change in tinnitus.

The study, supported by a grant from the Tinnitus Research Consortium (TRC), had three stages. The first stage involved consultation with 21 tinnitus experts, including audiologists, otologists and hearing researchers. The panel of experts evaluated 175 items from nine previously published tinnitus questionnaires and judged them based on their relevance to 10 tinnitus negative impact domains as well as their expected responsiveness, or ability to measure treatment-related improvement. After analyzing the content validity, relevance and potential responsiveness of the 175 items (Haynes et al., 1995), 43 items were selected for the first questionnaire prototype. The TRC initially required that 10 domains of negative tinnitus impact be covered by the TFI, but the expert panel added three more, so the first prototype of the TFI covered 13 domains of tinnitus impact. The TRC also recommended avoiding overly negative items (such as those referring to suicidal thoughts or feeling victimized or helpless), items referring to hearing loss without mentioning tinnitus, and items referring to more than one subtopic. Each domain contained three or four items, consistent with recommendations for achieving adequate reliability (Fabrigar et al., 1999; Moran et al., 2001). Each item asked respondents for a rating on a scale of 0 to 10, based on how they had experienced their tinnitus “over the past week”. For example, a typical question read, “Over the past week, how easy was it for you to cope with your tinnitus?” with responses ranging from 0 (“very easy to cope”) to 10 (“impossible to cope”).

During the second stage of the study, TFI Prototype 1 was tested on 326 tinnitus patients at five independent clinical sites. The goals for the second stage were to determine the responsiveness of items, or their ability to reflect changes in tinnitus status, to evaluate the 13 tinnitus impact domains and to determine the TFI’s ability to scale tinnitus severity. The questionnaire was administered at the initial intake assessment, after 3 months and after 6 months. In addition to completing the TFI, at the initial encounter the subjects completed a brief tinnitus history questionnaire, the Tinnitus Handicap Inventory (THI; Newman et al., 1996) and the Beck Depression Inventory-Primary Care (BDI-PC; Beck et al., 1997). The TFI was re-administered to 65 subjects after 3 months and again to 42 subjects after 6 months.

The researchers found that subjects had very few problems responding to the 43 selected items and that most questionnaires were fully completed. There were no floor or ceiling effects; that is, there were no items for which responses clustered at either end of the scale, which would have reduced those items’ potential responsiveness. The TFI had very high convergent validity, which means it agreed well with other published scales of tinnitus severity, such as the THI. There were large effect sizes, demonstrating that the Prototype 1 items had good responsiveness to treatment-related change and supporting use of the TFI as an outcome measure. Factor analysis of the 13 initial tinnitus impact domains yielded 8 clearly structured domains, which were retained for the second prototype.

The third stage of the study involved development and evaluation of TFI Prototype 2, which was modified based on validity and reliability measurements from the first prototype. Prototype 2 included the 30 best-functioning items from the first version, categorized according to 8 tinnitus impact domains. It was administered to 347 new participants at the initial assessment. Follow-up data were obtained from 155 participants after 3 months and from 85 participants after 6 months. Encouragingly, the results from clinical evaluation of Prototype 2 again showed good performance on all of the validity and reliability measurements, supporting its use for scaling tinnitus severity.

The best performing items from Prototype 2 were used to create the final version of the TFI, which contains 25 items in 8 domains or sub-scales: Intrusive, Sense of Control, Cognitive, Sleep, Auditory, Relaxation, Quality of Life and Emotional. Seven of the domains contain 3 items and the Quality of Life domain contains 4 items.

When used during the initial assessment, the TFI categorizes tinnitus severity according to five levels: not a problem, a small problem, a moderate problem, a big problem or a very big problem.  As a screening tool, this allows a clinician to determine the overall severity of the tinnitus to help formulate a treatment plan and consider whether referrals to other clinical professionals are needed. For example, an individual who scores in the “not a problem” level may require only brief reassurance and counseling and may be asked to follow-up only if symptoms progress. In contrast, an individual who scores in the “big problem” or “very big problem” categories will likely need referrals for additional diagnostic and therapeutic services right away.

The developers of the TFI acknowledge that their study is preliminary and that more research is needed to determine the TFI’s value as an outcome measurement tool. However, based on their analyses they recommend that a change in TFI score of 13 points or more be considered meaningful: a decrease of 13 or more indicates treatment-related improvement, whereas an increase of 13 or more indicates a significant exacerbation of symptoms.
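The 13-point criterion is straightforward to apply in code. The sketch below is illustrative, not the authors’ procedure: it assumes the common TFI scoring convention (the mean of the 0-10 item ratings, scaled to 0-100), and the function names and example ratings are ours.

```python
def tfi_total(item_ratings):
    """Overall TFI score: mean of the 0-10 item ratings, scaled to 0-100.
    (Scoring convention assumed here; the editorial specifies only the
    0-10 item scale and the 13-point change criterion.)"""
    if not item_ratings:
        raise ValueError("no item ratings supplied")
    return 10 * sum(item_ratings) / len(item_ratings)


def interpret_change(baseline, follow_up, criterion=13):
    """Apply the recommended 13-point criterion for meaningful change."""
    delta = follow_up - baseline
    if delta <= -criterion:
        return "meaningful improvement"
    if delta >= criterion:
        return "meaningful exacerbation"
    return "no meaningful change"


# Illustrative example: 25 item ratings at intake and at follow-up.
intake = tfi_total([6] * 25)      # 60.0
follow_up = tfi_total([4] * 25)   # 40.0
print(interpret_change(intake, follow_up))  # meaningful improvement
```

A 20-point drop, as in this hypothetical patient, comfortably exceeds the 13-point threshold and would be read as a treatment-related improvement.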

Most tinnitus self-report questionnaires were designed to assess tinnitus impact but do not specifically measure treatment outcomes. The Tinnitus Handicap Inventory (THI; Newman et al., 1996), however, has shown some promise as an initial evaluation tool and as a way to measure treatment outcome. After formulation of the final version of the TFI, the effect sizes of the TFI were compared to those of the THI. Overall, the TFI had greater responsiveness, indicating that it could potentially yield statistically significant differences with fewer subjects than the THI would require. Evaluation of sub-scale domains yielded some differences between the TFI and THI, primarily related to the “Catastrophic” subscale of the THI. Most of these items were not included in the TFI, based on the TRC’s recommendations to avoid questions dealing with negative ideation. The TRC recommended against including items relating to despair, inability to escape tinnitus and fear of having a terrible disease, because they may suggest to people with mild tinnitus that they will eventually have these concerns, creating feelings of negativity before treatment has started. Because these items on the THI correlated only moderately with the more neutrally worded items on the TFI, the authors suggested that the THI Catastrophic subscale might measure a different severity domain than the TFI and may be useful in combination with the TFI as an outcome measure.

The Tinnitus Functional Index (TFI), like other previously published tinnitus questionnaires, shows promise as a tool to measure and classify tinnitus severity. Its test items are easy for respondents to understand, and it can be administered quickly at or prior to the initial appointment. An additional benefit of the TFI appears to be its validity as an outcome measure of treatment effectiveness. This is critically important for guiding clinical decisions and modifying ongoing treatment plans. It also suggests that the TFI could be useful in laboratory research as a standardized way to evaluate and compare tinnitus treatment methods or to identify subject characteristics for inclusion in treatment groups. For instance, if a treatment is expected to affect the negative emotional impact of tinnitus more than the functional impact, the TFI could be useful in identifying appropriate subject candidates who are experiencing strong emotional reactions to their tinnitus. The TFI is one of the most systematically validated methods of assessing a patient’s reaction to tinnitus, and its ease of administration and interpretation place it among the most compelling assessment options for clinicians working with tinnitus patients.

If you would like to use the TFI, it is now available on a website posted by Oregon Health & Science University (OHSU). OHSU owns the copyright to the TFI, and its permission is required to use the instrument. The request form takes about 3 minutes to complete and gives you access to the TFI form and instructions. You can then use the TFI at no fee.


Axelsson, A. (1992). Conclusion to Panel Discussion on Evaluation of Tinnitus Treatments. In J.M. Aran & R. Dauman (Eds) Tinnitus 91. Proceedings of the Fourth International Tinnitus Seminar (pp. 453-455). New York, NY: Kugler Publications.

Beck, A.T., Guth, D. & Steer, R.A. (1997). Screening for major depression disorders in medical inpatients with the Beck Depression Inventory for Primary Care. Behaviour Research and Therapy 35, 785-791.

Dobie, R.A. (2004b). Overview: Suffering From Tinnitus. In J.B. Snow (Ed) Tinnitus: Theory and Management (pp.1-7). Lewiston, NY: BC Decker Inc.

Fabrigar, L.R., Wegener, D.T. & MacCallum, R.C. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods 4, 272-299.

Kamalski, D.M., Hoekstra, C.E. & VanZanten, B.G. (2010). Measuring disease-specific health-related quality of life to evaluate treatment outcomes in tinnitus patients: A systematic review. Otolaryngology Head and Neck Surgery 143, 181-185.

Meikle, M.B. (1992). Methods for Evaluation of Tinnitus Relief Procedures. In J.M. Aran & R. Dauman (Eds.) Tinnitus 91: Proceedings of the Fourth International Tinnitus Seminar (pp. 555-562). New York, NY: Kugler Publications.

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

Moran, L.A., Guyatt, G.H. & Norman, G.R. (2001). Establishing the minimal number of items for a responsive, valid, health-related quality of life instrument. Journal of Clinical Epidemiology 54, 571-579.

Newman, C.W., Jacobson, G.P. & Spitzer, J.B. (1996). Development of the Tinnitus Handicap Inventory. Archives of Otolaryngology Head and Neck Surgery 122, 143-148.

Stouffer, J.L. & Tyler, R. (1990). Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 55, 439-453.

Tyler, R. & Baker, L.J. (1983). Difficulties experienced by tinnitus sufferers. Journal of Speech and Hearing Disorders 48, 150-154.

The Top 5 Audiology Research Articles from 2012

2012 was an impressive year for scientific publication in audiology research and hearing aids. Narrowing the selection to 15 or 20 articles was far easier than selecting 5 top contenders. After some thought and discussion, here is our selection of the top 5 articles published in 2012.

1. Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss

Cox, R.M., Johnson, J.A., & Alexander, G.C. (2012). Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing, 33, 573-587.

This article is the second in a series investigating relationships between cochlear dead regions and benefit received from hearing aids. Patients diagnosed with high-frequency cochlear dead regions demonstrated superior outcomes when prescribed hearing aids with a broadband response, compared with a response that limited audibility above 1,000 Hz. These findings illustrate that patients with high-frequency cochlear dead regions benefit from, and prefer, amplification at frequencies within the range of the diagnosed dead regions.

2. The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids

Stiles, D.J., Bentler, R.A., & McGregor, K.K. (2012). The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. Journal of Speech Language and Hearing Research, 55, 764-778.

Pure-tone thresholds are the most commonly referenced diagnostic information when counseling families of children with hearing loss. This study compared the predictive value of pure-tone thresholds and the aided speech intelligibility index for a group of children with hearing loss. The aided speech intelligibility index proved to be a stronger predictor of word recognition, word repetition and vocabulary. These observations suggest that the aided speech intelligibility index is a useful tool in hearing aid fitting and family counseling.

3. NAL-NL2 Empirical Adjustments

Keidser, G., Dillon, H., Carter, L., & O’Brien, A. (2012). NAL-NL2 Empirical Adjustments. Trends in Amplification, 16(4), 211-223.

The NAL-NL2 prescription relies on several psychoacoustic models to derive target gains for a given hearing loss. Yet it is well understood that these models have limitations and do not account for many individual factors. The empirical adjustments to NAL-NL2 described here highlight several factors that should be considered when prescribing gain to hearing aid users.

4. Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

While the outcomes of this study were not surprising, similar data had not previously been published in the refereed literature. The authors show that patients fit to a verified prescriptive target (i.e., NAL-NL1) report significantly better outcomes than patients fit to the lower gain targets that fitting software offers as ‘first-fit’ prescriptions. This study is a testament to the importance of counseling patients regarding audibility and the necessity of real-ear measurement to ensure audibility.

5. Conducting qualitative research in audiology: A tutorial

Knudsen, L.V., Laplante-Levesque, A., Jones, L., Preminger, J.E., Nielsen, C., Lunner, T., Hickson, L., Naylor, G., & Kramer, S.E. (2012). Conducting qualitative research in audiology: A tutorial. International Journal of Audiology, 51, 83-92.

A substantial majority of the audiologic research literature reports on quantitative data, discussing group outcomes and average trends. Capturing individual differences and clearly documenting field experiences requires a different approach to data collection and analysis. Qualitative analysis leverages data from transcribed interviews or subjective reports to probe these individual accounts. This tutorial paper describes methods for qualitative analysis and cites existing studies that have used them.

The Tinnitus Handicap Inventory (THI): A quick and reliable method for measuring tinnitus outcomes

Newman, C.W., Sandridge, S.A. & Jacobson, G.P. (1998). Psychometric adequacy of the Tinnitus Handicap Inventory (THI) for evaluating treatment outcome. Journal of the American Academy of Audiology 9, 153-160.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors.

Tinnitus affects approximately 40-50 million people in the United States and an estimated 10-12 million people seek treatment for it (ATA, 2011; AAA, 2000). Though tinnitus has many potential causes, it often coincides with sensorineural hearing loss. In some cases medical or surgical treatment may be an option, but more often than not an individual with hearing loss and tinnitus will seek hearing aids. Therefore, clinical audiologists frequently encounter patients who suffer from tinnitus.

Because of the potentially disruptive effects of tinnitus on a patient’s ability to function and their sense of well-being, it is important for audiologists to include some estimation of tinnitus handicap in their overall clinical evaluation. Comprehensive diagnostic testing, including tinnitus pitch and loudness matching, should be supplemented with tinnitus self-report measures.  Self-report questionnaires elucidate the effect that the tinnitus has on the individual’s daily life. For instance, tinnitus can disrupt sleep and the ability to concentrate at work or in social interactions and can cause depression, irritability, frustration, stress and feelings of helplessness (Kochkin & Tyler, 1990). Examination of the emotional and social impact of tinnitus and how much it disturbs an individual’s daily activities is essential for determining the course of treatment.

There are a number of potential treatment approaches for tinnitus, including but not limited to: hearing aids, tinnitus maskers, combination hearing aid/masking devices, tinnitus retraining therapy, cognitive therapy, psychological counseling and stress management. Because any of these approaches may succeed with some patients and not others, it is essential to tailor the tinnitus rehabilitation program to each individual and to measure the efficacy of treatment to determine when a change in strategy is indicated.  Though a number of tinnitus questionnaires exist, many of them are limited in scope, difficult to score and interpret, or lack data to support their reliability and validity (Tyler, 1993). Tinnitus handicap questionnaires that are broad in scope and easy to administer and interpret are beneficial because clinicians are often working under time constraints. Test-retest reliability is particularly important if tinnitus self-report questionnaires are to be used to measure treatment outcomes.

The Tinnitus Handicap Inventory (THI; Newman et al., 1996) was developed as a brief, easily administered way to evaluate the disabling consequences of tinnitus. It has potential for use both in initial evaluation of handicap and later as a way to measure treatment outcome. In the paper discussed here, Newman, Sandridge and Jacobson measured the test-retest reliability and repeatability of the THI, then used their findings to develop categories for the severity of perceived tinnitus handicap.

The THI is a 25-item questionnaire with items grouped into three subscales: functional, emotional and catastrophic responses. The functional subscale items reflect the effect of tinnitus on mental, social, occupational and physical functioning. The emotional subscale items probe the individual’s emotional reactions to the tinnitus, and the catastrophic response items address whether tinnitus makes the respondent feel desperate, trapped, hopeless or out of control. A “yes” response is given 4 points, a “sometimes” response is given 2 points and a “no” response is given 0 points. The questionnaire yields scores for each subscale and a total score that ranges from 0 to 100, with higher scores indicating greater handicap.

Twenty-nine adult subjects, ranging in age from 23 to 87 years old, participated in the study. Subjects were patients at two outpatient Audiology clinics. All subjects presented with tinnitus as their primary complaint and most had gradually sloping, high-frequency, sensorineural hearing losses. The mean length of time that patients reported having tinnitus was 6 years and the mean length of time they had been “bothered” by the tinnitus was 3 years. Eleven participants reported unilateral tinnitus, whereas 18 reported bilateral tinnitus.  The participants reported, on average, that their tinnitus was present 90% of the time during waking hours.

Subjects completed the THI and a tinnitus case history questionnaire (modified from Stouffer and Tyler, 1990) following the scheduling of their initial appointment.  These forms were returned by mail prior to the visit. The second administration of the THI took place approximately 20 days later. This investigation was intended to measure test-retest reliability, which is the magnitude of agreement between two scores when the interval between them is short. The authors cited three reasons for this time frame. First, because many of the subjects were distressed by their tinnitus, they needed to be clinically evaluated and treated as soon as possible. Second, because tinnitus can fluctuate they wanted patients to make all of their judgments within a limited window of time. Third, the interval between initial clinical assessment and evaluation of treatment is often short. For instance, evaluation of the benefit of a tinnitus masker or hearing aid must be completed within the 30-day or 45-day trial period and one goal of the study was to assess the clinical value of the THI.

Results showed that the mean scores and standard deviations were comparable between the two THI administrations. Participants also maintained their relative standing on total and subscale scores from initial test to retest, as indicated by correlations ranging from .84 to .94. Repeatability was measured via calculation of difference scores and plots of their deviation from a difference score of zero. The THI was deemed to have acceptable repeatability because 95% of the difference scores fell within +/- 2 standard deviations from zero.  The repeatability measures allowed the investigators to determine how much of a difference in score would indicate a true difference in status for an individual patient. They found that the total THI scores on two separate administrations would have to differ by at least 20 points in order to be considered a true change. In other words, a clinician using the THI as a tool to measure treatment efficacy would have to see a decrease of at least 20 points to consider the treatment to be successful.
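The repeatability analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ code: the scores below are made up, and the check simply asks what proportion of test-retest difference scores fall within +/- 2 standard deviations of zero.

```python
from statistics import stdev

def repeatability(test_scores, retest_scores):
    """Proportion of test-retest difference scores falling within
    +/- 2 standard deviations of zero, as in the study's
    repeatability analysis (acceptable if ~0.95 or higher)."""
    diffs = [t - r for t, r in zip(test_scores, retest_scores)]
    sd = stdev(diffs)
    within = sum(1 for d in diffs if abs(d) <= 2 * sd)
    return within / len(diffs)

# Illustrative (made-up) total THI scores for four subjects:
prop = repeatability([50, 60, 70, 40], [48, 62, 66, 44])
print(prop)  # 1.0: all difference scores within +/- 2 SD of zero
```

With real clinical data, the same calculation, together with the spread of the difference scores, is what lets a clinician decide how large a score change must be before it reflects a true change rather than measurement noise.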

Following these analyses, quartiles were calculated from the mean total THI scores in order to assign scores to one of four handicap categories. On repeat administrations over time, movement from one category to another would indicate a change in tinnitus handicap status, either related to deterioration in the patient’s condition or an improvement based on treatment. The four handicap categories were as follows:

Quartile           Category                       Total THI Score

1st                   No handicap                       0-16

2nd                  Mild handicap                     18-36

3rd                   Moderate handicap              38-56

4th                   Severe handicap                 58-100
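For illustration, the scoring rules and quartile categories quoted above can be expressed directly in code. This is a sketch based only on the values reported in this summary; the function names are ours. Note that because items score 0, 2 or 4 points, totals are always even, so the gaps in the published ranges (17, 37, 57) can never occur.

```python
# Response coding as described above: "yes"=4, "sometimes"=2, "no"=0.
POINTS = {"yes": 4, "sometimes": 2, "no": 0}

def thi_total(responses):
    """Total THI score (0-100) from the 25 item responses."""
    if len(responses) != 25:
        raise ValueError("the THI has 25 items")
    return sum(POINTS[r.lower()] for r in responses)

def handicap_category(score):
    """Quartile-based handicap categories from Newman et al. (1998)."""
    if score <= 16:
        return "no handicap"
    if score <= 36:
        return "mild handicap"
    if score <= 56:
        return "moderate handicap"
    return "severe handicap"

def true_change(pre, post):
    """A decrease of at least 20 points indicates a true improvement."""
    return (pre - post) >= 20

# Hypothetical respondent: 10 "yes", 10 "sometimes", 5 "no" answers.
responses = ["yes"] * 10 + ["sometimes"] * 10 + ["no"] * 5
score = thi_total(responses)        # 40 + 20 + 0 = 60
print(handicap_category(score))     # severe handicap
```

On repeat administration, `true_change` applies the 20-point criterion from the repeatability analysis: only a drop of 20 points or more would be read as a treatment-related improvement rather than test-retest variability.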

Self-report scales are already widely used to illuminate a patient’s perceived hearing handicap and as a method of evaluating hearing aid fitting outcome or other aural rehabilitation efforts. One of the primary goals of Newman, Sandridge and Jacobson’s study was to determine if the THI could be used as a clinical tool to evaluate tinnitus treatment outcomes. The reliability and repeatability of the THI suggest that it could be used in this way, and it is a straightforward scale that is easy to administer and score. The authors suggest that the THI could be combined with other 25-item scales like the Hearing Handicap Inventory for Adults (HHIA; Newman et al., 1990) or Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) and the Dizziness Handicap Inventory (DHI; Jacobson & Newman, 1990) as a self-report inventory battery to evaluate initial handicap and the efficacy of audiological and otological rehabilitation efforts.

Kochkin and Tyler (2008) reported that 60% of tinnitus sufferers report benefit from the use of hearing aids and that 88% of hearing care professionals treat tinnitus with hearing aids. Surr et al. (1999) administered the THI before and six weeks after hearing aid fitting and reported that 90% of their participants demonstrated a significant reduction in THI scores. Because of the co-occurrence of tinnitus and sensorineural hearing loss, clinical audiologists frequently encounter tinnitus sufferers and may often be the first or only health professional to discuss tinnitus management options with the patient. It is important for audiologists to be familiar with tinnitus etiologies, evaluation techniques, treatment options and efficacy measures so they can provide proper guidance to their patients. Clinical appointments are often subject to time constraints, but the clinician is accountable for treatment outcomes, so brief but robust self-report inventories like the THI can be valuable clinical tools.


American Academy of Audiology (2000). Audiologic guidelines for the evaluation and management of tinnitus. AAA website.

American Tinnitus Association (2011). As cited in Beck, D., Hearing aid amplification and tinnitus: 2011 overview. Hearing Journal 64 (6), 12-13.

Jacobson, G.P. & Newman, C.W. (1990). The development of the Dizziness Handicap Inventory. Archives of Otolaryngology Head and Neck Surgery 116, 424-427.

Kochkin, S. & Tyler, R.S. (2008). Tinnitus treatment and the effectiveness of hearing aids – hearing care professional perceptions. Hearing Review 15(13), 14-18.

Newman, C.W., Weinstein, B.E., Jacobson, G.P & Hug, G.A. (1990). The Hearing Handicap Inventory for Adults: psychometric adequacy and audiometric correlates. Ear and Hearing 11, 176-180.

Newman, C.W., Jacobson, G.P. & Spitzer, J.B. (1996). Development of the Tinnitus Handicap Inventory. Archives of Otolaryngology Head and Neck Surgery 122, 143-148.

Newman, C.W., Sandridge, S.A. & Jacobson, G.P. (1998). Psychometric adequacy of the Tinnitus Handicap Inventory (THI) for evaluating treatment outcome. Journal of the American Academy of Audiology 9, 153-160.

Stouffer, J.L. & Tyler, R.S. (1990). Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 55, 439-453.

Surr, R.K., Kolb, J.A., Cord, M.T. & Garrus, N.P. (1999). Tinnitus handicap inventory (THI) as a hearing aid outcome measure. Journal of the American Academy of Audiology 10(9), 489-495.

Tyler, R.S. (1993). Tinnitus disability and handicap questionnaires. Seminars in Hearing 14, 377-384.

Ventry, I. & Weinstein, B. (1982). The Hearing Handicap Inventory for the Elderly: a new tool. Ear and Hearing 3, 128-134.

Can preference for one or two hearing aids be predicted?

Noble, W. (2006). Bilateral hearing aids: A review of self-reports of benefit in comparison with unilateral fitting. International Journal of Audiology, 45(Supplement 1), S63-S71.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors.

The potential benefits of bilateral and unilateral hearing aids have been debated for years. Laboratory studies and clinical recommendations generally support the use of two hearing aids for individuals with bilateral hearing loss. Yet some field studies have produced equivocal reports. In his 2006 survey of bilateral and unilateral clinical field trials, William Noble discusses variables contributing to the lack of consensus and addresses a couple of commonly cited clinical rationales for bilateral hearing aid use. Though subject population, experimental design, degree of hearing loss and usage patterns vary from one study to another, factors emerge that help determine likelihood of success in unilateral and bilateral hearing aid fitting.

The strong predisposition for clinicians to recommend bilateral hearing aid use may be based on laboratory findings as well as on common sense. Several studies have reported advantages of binaural listening (Dillon, 2001; McArdle et al., 2012), but clinicians often support their recommendation of two hearing aids with an analogy to binocular vision. Individuals with impaired vision no longer wear monocles (with apologies to English detectives) but instead opt for binocular corrective lenses. Noble argues that the visual analogy is not apt, partly because typical vision loss is not comparable to typical hearing loss. Rather, the vision loss that is treated with corrective lenses is most similar to a mild conductive hearing loss, which is rarely treated with hearing aids. Cochlear receptor damage in sensorineural hearing loss, the most common type of hearing loss treated with hearing aids, introduces processing complexities that amplification alone may not adequately correct.

Though Noble’s comments are correct, the visual analogy is presented to hearing aid patients in an effort to explain how two hearing aids allow more effective use of bilateral listening cues, much as two corrective lenses can aid binocular, stereoscopic cues for depth and three-dimensional perception.  Some patients may have difficulty understanding how two hearing aids can provide beneficial cues, especially in noise, instead of additional distraction. The visual analogy, while admittedly not perfect, is a way of explaining more simply the benefit of bilateral perceptual cues.

The other commonly cited clinical rationale for bilateral hearing aid use is related to the auditory deprivation effect (Silman et al., 1984). In individuals with bilateral hearing loss, there is concern that if only one hearing aid is used, the unaided ear will be deprived of sound and suffer additional deterioration. This appears to be a long-term change in the unaided ear, affecting word recognition scores but not pure tone or speech reception thresholds. Whether the unaided-ear effect has implications for everyday hearing aid use and for the ability to function in social and work-related situations requires further investigation before it can be weighed heavily in clinical recommendations.

Though most laboratory studies support the benefits of binaural listening, field studies and self-reports on bilateral hearing aid use have not always provided similar outcomes (Arlinger et al., 2003; Cox et al., 2011). For this reason, Noble reviewed evidence from clinical trials to determine the conditions under which bilateral hearing aid use is most likely to be beneficial and the patient attributes that most support the recommendation of bilateral hearing aids. Three of the reviewed studies were retrospective, drawing on reports from clinical patients months or years after they were fitted with unilateral or bilateral hearing aids. Two studies (Dirks & Carhart, 1962; Kochkin & Kuk, 1997) suggested that people who preferred bilateral fittings had greater levels of hearing loss, though degree of hearing loss was not controlled. A third study, by Noble et al. (1995), carefully matched 17 sets of unilateral and bilateral users according to degree of hearing loss and examined speech reception and directional and distance spatial perception in aided and unaided conditions. Significant benefits were seen when comparing aided and unaided conditions, but no differences were observed between the unilaterally and bilaterally aided groups. The subject sample had mild to moderate hearing losses, so some caution should be taken when extrapolating these findings to individuals with more severe losses.

In contrast, a study of new hearing aid users found a two-to-one preference for unilateral use after a six-month period (Schreurs & Olsen, 1985). Individuals in this study wore one aid and two aids alternately for one week at a time, which arguably could have adversely influenced their acclimatization. Hearing aid users experience a sometimes extended period of adjustment to amplification (Keidser et al., 2009), which can affect their subjective judgments of sound quality and overall benefit (Bentler et al., 1993). For instance, occlusion and unnatural perception of one's own voice can be annoying to new hearing aid users and are often more pronounced with bilateral hearing aids. Though these qualities almost always improve significantly with consistent bilateral use, it is not surprising that inexperienced, intermittent bilateral users might prefer the subjective sound quality of wearing one hearing aid at a time. Additionally, Schreurs & Olsen's study was conducted in 1985, when directional microphones were not in widespread use. Hearing aid users at that time often removed one hearing aid in noisy situations because bilateral omnidirectional microphones made surrounding noise sources too disruptive. A field study of unilateral versus bilateral use with modern hearing aids, allowing for adequate acclimatization, might yield different results.

Two follow-up studies examined subjects who were slightly younger than a typical clinical population (Brooks & Bulmer, 1981; Erdman & Sedge, 1981). Both found that a majority of subjects preferred the use of two hearing aids. These reports suggest that individuals whose activities require effective communication in challenging listening situations may prefer bilateral hearing aids. The remaining studies in Noble's review were crossover studies, experiments in which clinical patients were randomly assigned to a unilateral or bilateral condition and crossed over to the other condition after several weeks. Stephens et al. (1991) found a greater degree of hearing loss and self-rated disability in the individuals who opted for bilateral hearing aid use, consistent with the retrospective reports of Dirks & Carhart (1962) and Kochkin & Kuk (1997) discussed earlier.

The studies examined in this literature review reveal several patterns. First, individuals with more severe hearing loss or perceived disability were more likely to prefer bilateral hearing aids. Subjects who preferred unilateral hearing aid use tended to have mild to moderate losses.  Second, participants employed in dynamic listening situations preferred bilateral hearing aid use, suggesting that individuals whose regular activities require effective communication in a variety of contexts may be more likely to benefit from the use of two hearing aids (Noble & Gatehouse, 2006).

Noble points out that laboratory studies cannot adequately consider the range of experiences encountered by hearing aid users in everyday situations. Laboratory research isolates variables for study, with subjects responding to specific stimuli in isolated, carefully contrasted conditions, whereas in everyday life, hearing aid users encounter a wide range of listening situations ranging from single speech sources in quiet conditions to multiple speech sources in the presence of competing noise. Conversely, clinical field trials probe the subjective responses of hearing aid users in a variety of real-world situations, but it can be difficult to extricate the specific variables affecting their perceptions. Though their outcomes may not always appear to be in agreement, both types of study provide useful information to guide clinical practice.

It is clear that bilateral hearing aid use has the potential to reduce listening effort (Feuerstein, 1992) and to improve speech understanding, localization and receptiveness to lateral sounds (Noble & Gatehouse, 2006). Still, field studies consistently report a subset of patients who prefer unilateral hearing aid use. Whether environmentally or psychoacoustically motivated, the factors that underlie these preferences remain unclear. Given the documented benefits, bilateral hearing loss should first be treated with the prescription of bilateral hearing aids; unilateral use should be considered only after the patient has adequate field experience and expresses a subjective preference for unilateral amplification.


Arlinger, S., Brorsson, B., Lagerbring, C., Leijon, A., Rosenhall, U. & Schersten, T. (2003). Hearing Aids for Adults – benefits and costs. Stockholm: Swedish Council on Technology Assessment in Health Care.

Bentler, R.A., Niebuhr, D.P., Getta, J.P. & Anderson, C.V. (1993). Longitudinal study of hearing aid effectiveness. II. Subjective measures. Journal of Speech and Hearing Research 36, 820-831.

Byrne, D., Noble, W. & LePage, B. (1992). Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. Journal of the American Academy of Audiology 3, 369-382.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

Dillon, H. (2001). Monaural and binaural considerations in hearing aid fitting. In: Dillon, H., ed. Hearing Aids. Turramurra, Australia: Boomerang Press, 370-403.

Dirks, D. & Carhart, R. (1962). A survey of reactions from users of binaural and monaural hearing aids. Journal of Speech and Hearing Disorders 27(4), 311-322.

Feuerstein, J.F. (1992). Monaural versus binaural hearing: Ease of listening, word recognition and attentional effort. Ear & Hearing 13(2), 80-86.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2009). Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology 47, 621-635.

Kochkin, S. & Kuk, F. (1997). The binaural advantage: Evidence from subjective benefit and customer satisfaction data. The Hearing Review, 4.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Noble, W. (2006). Bilateral hearing aids: A review of self-reports of benefit in comparison with unilateral fitting. International Journal of Audiology, 45(Supplement 1), S63-S71.

Noble, W. & Gatehouse, S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 45(2), 172-181.

Noble, W., TerHorst, K. & Byrne, D. (1995). Disabilities and handicaps associated with impaired auditory localization. Journal of the American Academy of Audiology 6(2), 129-140.

Silman, S., Gelfand, S. & Silverman, C. (1984). Late-onset auditory deprivation: Effects of monaural versus binaural hearing aids. Journal of the Acoustical Society of America 76, 1357-1362.

True or False? Two hearing aids are better than one.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Audiologists are accustomed to recommending two hearing aids for individuals with bilateral hearing loss, based on the known benefits of binaural listening (Carhart, 1946; Keys, 1947; Hirsh, 1948; Koenig, 1950). Even so, the potential advantages of binaural versus monaural amplification have been debated for many years.

One benefit of binaural listening, binaural squelch, occurs when the signal and competing noise come from different directions (Kock, 1950; Carhart, 1965). In that case, interaural time and intensity differences cause the waveforms arriving at the two ears to differ, creating a dichotic listening situation. The central auditory system is thought to combine the two disparate waveforms, essentially subtracting the waveform arriving at one ear from that at the other, yielding an effective SNR improvement of about 2-3 dB (Dillon, 2001).

Binaural redundancy, another potential benefit of listening with two ears, is an advantage created simply by receiving similar information in both ears. Dillon (2001) describes binaural redundancy as allowing the brain to get two “looks” at the same sound, resulting in SNR improvement of another 1-2 dB (MacKeith & Coles, 1971; Bronkhorst & Plomp, 1988).
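Dillon's "two looks" description can be illustrated with a toy simulation: if the same speech waveform reaches both ears while each ear contributes its own independent noise, combining the two channels adds the speech coherently and the noise incoherently, improving the effective SNR by about 3 dB. This is a deliberately simplified statistical sketch (Gaussian surrogates stand in for speech and noise), not a model of actual auditory processing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # samples

# "Speech" surrogate: identical in both ears (frontal source, diotic).
speech = rng.standard_normal(n)

# Independent noise in each ear, equal in power to the speech (0 dB SNR).
noise_l = rng.standard_normal(n)
noise_r = rng.standard_normal(n)

def snr_db(sig, noise):
    """Signal-to-noise ratio in dB from average power."""
    return 10 * np.log10(np.mean(sig**2) / np.mean(noise**2))

# Averaging the two ears: speech adds coherently, noise adds incoherently,
# so the averaged noise has roughly half the power of either ear's noise.
combined_noise = (noise_l + noise_r) / 2

snr_one_ear = snr_db(speech, noise_l)          # ~0 dB
snr_combined = snr_db(speech, combined_noise)  # ~+3 dB

print(f"one ear: {snr_one_ear:.1f} dB, two 'looks': {snr_combined:.1f} dB")
```

The roughly 3 dB gain in this idealized case brackets the 1-2 dB redundancy benefit reported behaviorally, where the two ears' signals are never perfectly matched and the noises are never fully independent.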

Though the benefits of binaural listening would imply benefits of binaural amplification as well, there has been a lack of consensus among researchers. Some studies have reported clear advantages of binaural amplification over monaural fittings, but others have not. Decades ago, a number of studies were published on both sides of the argument; because differences in outcomes may have been related to loudspeaker location and the presentation angles of the speech and noise signals (Ross, 1980), the potential advantages of binaural amplification remained unclear.

Some recent reports have supported the use of monaural amplification over binaural for some individuals, in objective and subjective studies. Henkin et al. (2007) reported that 71% of their subjects performed better on a speech-in-noise task when fitted with one hearing aid on the “better” ear than when fitted with two hearing aids. Cox et al. (2011) reported that 46% of their subjects preferred to use one hearing aid rather than two.

In contrast, a report by Mencher & Davis (2006) concluded that 90% of adults perform better with two hearing aids. They explained that 10% of adults may have experienced negative binaural interaction or binaural interference, which is described as the inappropriate fusion of signals received at the two ears (Jerger et al., 1993; Chmiel et al., 1997).

The phenomenon of binaural interference and the potential advantage of monaural amplification were investigated by Walden & Walden (2005). In a speech-recognition-in-noise task in which speech and competing babble were presented through a single loudspeaker at 0 degrees azimuth, they found that performance with one hearing aid was better than binaural performance for 82% of their participants. This is in contrast to Jerger's (1993) report of an incidence of 8-10% for subjects who might have experienced binaural interference. One criticism of Walden & Walden's study is that their "monaural" condition left the unaided ear open. Their presentation level of 70 dB HL and the use of subjects with mild to moderate hearing loss indicate that subjects were still receiving speech and noise cues in the unaided ear, resulting in a modified, but still binaural, listening situation. Furthermore, their choice of a single loudspeaker for presentation of noise and speech directly in front of the listener created a diotic listening condition, which eliminated the use of binaural head shadow cues. This methodology may have limited the study's relevance to typical everyday situations in which listeners are engaged in face-to-face conversation with competing noise all around.

Because the potential advantages or disadvantages of binaural amplification have such important clinical implications, Rachel McArdle and her colleagues sought to clarify the issue with a two-part study of monaural and binaural listening. The first experiment was an effort to replicate Walden and Walden’s 2005 sound field study, this time adding a true monaural condition and an unaided condition. The second experiment examined monaural versus diotic and dichotic listening conditions, using real-world recordings from a busy restaurant.

Twenty male subjects were recruited from the Bay Pines Veterans Affairs Medical Facility. Subjects ranged in age from 59 to 85 years and had bilateral, symmetrical hearing losses. All were experienced users of binaural hearing aids.

For the first experiment, subjects wore their own hearing aids, so a variety of models from different manufacturers were represented. Hearing aids were fitted according to NAL-NL1 prescriptive targets and were verified with real-ear measurements. All of the hearing aids were multi-channel instruments with directional microphones, noise reduction and feedback management. None of the special features were disabled during the study.

Subjects were tested in sound field, with a single loudspeaker positioned 3 feet in front of them, under five conditions: 1) right ear aided, left ear open, 2) left ear aided, right ear open, 3) binaurally aided, 4) right ear aided, left ear plugged (true monaural) and 5) unaided. The QuickSIN test (Killion et al., 2004) was used to evaluate sentence recognition in noise in all of these conditions. The QuickSIN yields a value for "SNR loss", which represents the SNR required to obtain a score of 50% correct for key words in the sentences.
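As a concrete illustration of that scoring scheme: on the standard QuickSIN, six sentences are presented at SNRs from 25 down to 0 dB in 5 dB steps, with five key words each, and SNR loss is derived from the total number of key words repeated correctly (Killion et al., 2004). The sketch below assumes the published Spearman-Karber-based scoring rule; the function name is illustrative, not part of any published software.

```python
# QuickSIN scoring (Killion et al., 2004): six sentences at SNRs of
# 25, 20, 15, 10, 5 and 0 dB, five key words per sentence.
# Spearman-Karber estimate of the 50%-correct point:
#   SNR-50   = 27.5 - (total key words correct)
#   SNR loss = SNR-50 - 2   (2 dB is the average normal-hearing SNR-50)
#            = 25.5 - (total key words correct)

def quicksin_snr_loss(words_correct_per_sentence):
    """SNR loss in dB from per-sentence key-word counts (six ints, 0-5)."""
    assert len(words_correct_per_sentence) == 6
    assert all(0 <= w <= 5 for w in words_correct_per_sentence)
    total = sum(words_correct_per_sentence)
    return 25.5 - total

# A listener who repeats every key word down to 5 dB SNR but misses
# everything at 0 dB scores 25 words: 0.5 dB SNR loss (normal range).
print(quicksin_snr_loss([5, 5, 5, 5, 5, 0]))  # 0.5
```

Lower (or negative) values indicate performance at or better than the normal-hearing reference, while larger positive values reflect the additional SNR a listener needs to reach 50% correct.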

The primary question of the first experiment was whether two aided ears would achieve better performance than one. The results showed that only 20% of the participants performed better with one aid, whereas 80% performed better with binaural aids. The lowest SNR loss values were obtained in the binaural condition, followed by right ear aided, left ear aided, true monaural (left ear plugged) and unaided. Analysis of variance revealed that the binaural condition was significantly better than all other conditions; the right-ear-only condition was significantly better than unaided, but all other comparisons failed to reach significance.

The results of Experiment 1 are comparable to results reported by Jerger (1993) but contrast sharply with Walden and Walden's 2005 study, in which 82% of respondents performed better monaurally aided. To compare their results further to Walden and Walden's, McArdle and her colleagues compiled scores for the subjects' better ears and found no significant difference between binaural and better-ear performance, though both of these conditions were significantly better than the others. They also examined the effect of degree of hearing loss and found that individuals with hearing thresholds poorer than 70 dB HL achieved about twice as much improvement from binaural amplification as subjects with better hearing. Still, the results of Experiment 1 support the benefit of binaural hearing aids for most participants, especially those with poorer hearing.

The purpose of Experiment 2 was to further examine the potential benefit of hearing with two ears, using diotic and dichotic listening conditions. Diotic listening refers to a condition in which the listener receives the same stimulus in both ears, whereas dichotic listening refers to more typical real-world conditions in which each ear receives slightly different information, subject to head shadow and time and intensity differences.

Speech recognition was evaluated in four conditions: 1) monaural right, 2) monaural left, 3) diotic and 4) binaural (dichotic). Materials for the R-SPACE QSIN test (Revit et al., 2007) were recorded through a KEMAR manikin with competing restaurant noise presented through eight loudspeakers. Recordings were taken from eardrum-position microphones on each side of KEMAR, thus preserving binaural cues that would be typical for a listener in a real-world setting.

In Experiment 2, subjects were not tested wearing hearing aids; the stimuli were presented via insert earphones. The use of recorded stimuli presented under earphones eliminated the potentially confounding factor of hearing aid technology on performance and allowed the presentation of real-world recordings in truly monaural, diotic and dichotic conditions.

The best performance was demonstrated in the binaural condition, followed by the diotic condition, then the monaural conditions. The binaural (dichotic) condition was significantly better than the diotic condition, and both were significantly better than the monaural conditions. Again in contrast to Walden and Walden's study, 80% of the subjects scored better in the binaural condition than in either monaural condition, and 65% scored better in the diotic condition than in either monaural condition. These results support the findings of the first experiment and indicate that for the majority of listeners, speech recognition in noise improves when two ears are listening instead of one. Furthermore, the finding that the binaural condition was significantly better than the diotic condition indicates that it is not only the use of two ears but also the availability of binaural cues that has a positive impact on speech recognition in competing noise.

McArdle and her colleagues point out that their study, as well as other recent reports (Walden & Walden, 2005; Henkin et al., 2007), suggests that the majority of listeners perform better on speech-in-noise tasks when they are listening with two ears. When binaural time and intensity cues are available, performance is even better than when the same stimulus reaches each ear. They also found that the potential benefit of binaural hearing was even more pronounced for individuals with more severe hearing loss. This supports the recommendation of binaural hearing aids for individuals with bilateral hearing loss, especially those with severe loss.

Cox et al. (2011) reported that listeners who experienced better performance in everyday situations tended to prefer binaural hearing aid use, but also found that 43 of 94 participants generally preferred monaural to binaural use over a 12-week trial. For new hearing aid users or prior monaural users, this is not surprising, as it can take time to adjust to binaural hearing aid use. Clinical observation suggests that individuals with prior monaural hearing aid experience may have more difficulty adjusting to binaural use than individuals who are new to hearing aids altogether. However, with consistent daily use, reasonable expectations and appropriate counseling, most users can successfully adapt to binaural use. It is possible that the subjects in Cox et al.'s study who preferred monaural use were responding to factors other than performance in noise. If they were switching between monaural and binaural use, perhaps they did not wear the two instruments together consistently enough to fully acclimate to binaural use in the time allotted.

Though their study presented strong support for binaural hearing aid use, McArdle and her colleagues suggest that listeners may benefit from “self-experimentation” to determine the optimal configuration with their hearing aids. This suggestion is an excellent one, but it may be most helpful within the context of binaural use. Even patients with adaptive and automatic programs can be fitted with manually accessible programs designed for particularly challenging situations and should be encouraged to experiment with these programs to determine the optimal settings for their various listening needs.

Clinicians who have been practicing for several years may recall the days when hearing aid users often lost their hearing aids in restaurants because they had removed one aid in order to more easily ignore background noise. That is less likely to occur now, as current technology can help most hearing aid users function quite well in noisy situations. With directional microphones and multiple programs, along with the likelihood that speech and background noise are often spatially separated, binaural hearing aids are likely to offer advantageous performance for speech recognition in most acoustic environments. Bilateral data exchange and wireless communication offer additional binaural benefits, as two hearing instruments can work together to improve performance in noise and provide binaural listening for telephone or television use.


Bronkhorst, A.W. & Plomp, R. (1988). The effect of head induced interaural time and level differences on speech intelligibility in noise. Journal of the Acoustical Society of America 83, 1508-1516.

Carhart, R. (1946). Selection of hearing aids. Archives of Otolaryngology 44, 1-18.

Carhart, R. (1965). Problems in the measurement of speech discrimination. Archives of Otolaryngology 82, 253-260.

Chmiel, R., Jerger, J., Murphy, E., Pirozzolo, R. & Tooley, Y.C. (1997). Unsuccessful use of binaural amplification by an elderly person. Journal of the American Academy of Audiology 8, 1-10.

Cox, R.M., Schwartz, K.S., Noe, C.M. & Alexander, G.C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing 32 (2), 181-197.

Dillon, H. (2001). Monaural and binaural considerations in hearing aid fitting. In: Dillon, H., ed. Hearing Aids. Turramurra, Australia: Boomerang Press, 370-403.

Henkin, Y., Waldman, A. & Kishon-Rabin, L. (2007). The benefits of bilateral versus unilateral amplification for the elderly: are two always better than one? Journal of Basic and Clinical Physiology and Pharmacology 18(3), 201-216.

Hirsh, I.J. (1948). Binaural summation and interaural inhibition as a function of the level of masking noise. American Journal of Psychology 61, 205-213.

Jerger, J., Silman, S., Lew, J. & Chmiel, R. (1993). Case studies in binaural interference: converging evidence from behavioral and electrophysiologic measures. Journal of the American Academy of Audiology 4, 122-131.

Keys, J.W. (1947). Binaural versus monaural hearing. Journal of the Acoustical Society of America 19, 629-631.

Killion, M.C., Niquette, P.A., Gudmundsen, G.I., Revit, L.J. & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal hearing and hearing-impaired listeners. Journal of the Acoustical Society of America 116, 2395-2405.

Kock, W.E. (1950). Binaural localization and masking. Journal of the Acoustical Society of America 22, 801-804.

Koenig, W. (1950). Subjective effects in binaural hearing. [Letter to the Editor] Journal of the Acoustical Society of America 22, 61-62.

MacKeith, N.W. & Coles, R.A. (1971). Binaural advantages in hearing speech. Journal of Laryngology and Otology 85, 213-232.

McArdle, R., Killion, M., Mennite, M. & Chisolm, T. (2012).  Are Two Ears Not Better Than One? Journal of the American Academy of Audiology 23, 171-181.

Mencher, G.T. & Davis, A. (2006) Binaural or monaural amplification: is there a difference? A brief tutorial. International Journal of Audiology 45, S3-S11.

Revit, L., Killion, M. & Compton-Conley, C. (2007). Developing and testing a laboratory sound system that yields accurate real-world results. Hearing Review, online edition, October 2007.

Ross, M. (1980). Binaural versus monaural hearing aid amplification for hearing impaired individuals. In: Libby, E.R., Ed. Binaural Hearing and Amplification. Chicago: Zenetron, 1-21.

Walden, T.C. & Walden, B.E. (2005). Monaural versus binaural amplification for adults with impaired hearing. Journal of the American Academy of Audiology 16: 574-584.