Starkey Research & Clinical Blog

Do Patients with Severe Hearing Loss Benefit from Directional Microphones?

Ricketts, T.A., & Hornsby, B.W.Y. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology, 45, 190-197.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Hearing Technologies. This editorial does not represent the opinions of the original authors.

The benefit of directional microphones for speech recognition in noise is well established for individuals with mild to moderate hearing loss (Madison & Hawkins, 1983; Killion et al., 1998; Ricketts, 2000a; Ricketts & Henry, 2002). The potential benefit of directional microphones for severely hearing-impaired individuals is less well understood, and few studies have examined directional benefit when hearing loss is greater than 65dB.

Killion and Christensen (1998) proposed that listeners with severe-to-profound hearing loss may experience reduced directional benefit because they are less able to make use of speech information across frequencies. Ricketts, Henry and Hornsby confirmed this hypothesis in a 2005 study. They found an approximately 7% increase in speech recognition score per 1dB increase in directivity for listeners with moderate hearing loss, whereas listeners with severe loss achieved only an approximately 3.5% increase per 1dB increase in directivity.
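To make those per-dB figures concrete, here is a minimal arithmetic sketch. It is our own illustration, not taken from the paper's data tables: the function name and the 4dB directivity example are assumptions, and the linear extrapolation holds only over the small range the slopes were measured on.

```python
# Per-dB directivity slopes reported by Ricketts, Henry & Hornsby (2005),
# applied to a hypothetical 4dB improvement in directivity index.
SLOPE_MODERATE = 7.0  # % speech-recognition gain per dB, moderate loss
SLOPE_SEVERE = 3.5    # % gain per dB, severe loss

def expected_gain(slope_pct_per_db: float, di_improvement_db: float) -> float:
    """Linear extrapolation of score gain from a directivity improvement."""
    return slope_pct_per_db * di_improvement_db

print(expected_gain(SLOPE_MODERATE, 4.0))  # 28.0 percentage points
print(expected_gain(SLOPE_SEVERE, 4.0))    # 14.0 percentage points
```

Under this rough model, the same 4dB directivity improvement buys a severely impaired listener only half the score gain it buys a moderately impaired one.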

In the study reviewed here, Ricketts and Hornsby used individually determined SNRs and auditory-visual stimuli that allowed testing at poorer SNRs without floor effects. The authors point out that visual cues usually offer greater benefit at poor SNRs, especially for sentence materials (Erber, 1969; Sumby & Pollack, 1954; MacLeod & Summerfield, 1987). Because individuals rely more on visual cues at poorer SNRs, visual information that provides complementary, non-redundant cues is most beneficial (Grant et al., 1998; Walden et al., 1993).

The purpose of their study was to examine potential directional benefit for severely hearing-impaired listeners at multiple SNRs in auditory-only and auditory-visual conditions. Directional and omnidirectional performance was also tested in quiet, to rule out performance differences between microphone modes that could be attributed to reduction of environmental noise by the directional microphone. Finally, the authors examined whether performance in quiet would significantly exceed performance at highly positive SNRs. Though significant improvement at SNRs more favorable than +15dB is usually not expected, some research suggests that hearing-impaired individuals may experience additional benefit from more favorable SNRs (Studebaker et al., 1999).

Twenty adults with severe-to-profound sensorineural hearing loss participated in the study. All participants used oral communication, had at least nine years of experience with hearing aids and had pure tone average thresholds greater than 65dB. Participants were fitted with power behind-the-ear hearing aids coupled to full shell, unvented earmolds. Digital noise reduction and feedback management were turned off. The directional program was equalized so that its gain matched the omnidirectional mode as closely as possible.

The Audio/Visual Connected Speech Test (CST; Cox et al., 1987), a speech recognition test with paired passages of connected speech, was presented to listeners on DVD. Speech was presented from 0 degrees azimuth and uncorrelated competing noise was presented via five loudspeakers surrounding the listener. Testing took place in a sound booth with reflective panels to approximate the levels of reverberation common in everyday situations.

Baseline SNRs were obtained for each subject in auditory-only and auditory-visual conditions, at a level near, but not at, floor performance. Speech recognition testing was then conducted in omnidirectional and directional modes at the baseline SNR, baseline +4dB and baseline +8dB. Presentation SNRs ranged from 0dB to +24dB for auditory-only conditions and from -6dB to +18dB for auditory-visual conditions. Listeners were also tested with auditory-only stimuli in quiet, in both omnidirectional and directional modes. Testing in quiet was not performed with auditory-visual stimuli, as performance was expected to approach ceiling levels.

The multiple SNR levels were achieved with two different methodologies. Half of the participants listened to a fixed noise level of 60dB SPL while speech levels were varied to achieve the desired SNRs. The remaining participants listened to a fixed speech level of 67dB SPL while noise levels were adjusted to reach the desired SNRs. Data analysis revealed no significant differences between the two methodologies for any of the variables, so the data were pooled for subsequent analyses.
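The two level-setting methods reduce to simple arithmetic, sketched below. The function names are ours, not from the study; only the 60dB SPL noise anchor and 67dB SPL speech anchor come from the paper.

```python
# Sketch of the two SNR methodologies: each returns (speech, noise) in dB SPL.

def fixed_noise_levels(target_snr_db: float, noise_db_spl: float = 60.0):
    """Vary the speech level against a fixed noise floor."""
    return noise_db_spl + target_snr_db, noise_db_spl

def fixed_speech_levels(target_snr_db: float, speech_db_spl: float = 67.0):
    """Vary the noise level against a fixed speech level."""
    return speech_db_spl, speech_db_spl - target_snr_db

# A +8dB SNR condition under each method:
print(fixed_noise_levels(8.0))   # (68.0, 60.0)
print(fixed_speech_levels(8.0))  # (67.0, 59.0)
```

Both methods yield the same SNR at different absolute levels, which is why the pooled analysis had to first confirm that overall presentation level did not itself affect performance.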

The results showed significant main effects for microphone mode (directional versus omnidirectional), SNR and presentation condition (auditory-only versus auditory-visual). There were significant interactions between microphone mode and SNR, as well as between presentation condition and SNR. Each increase in SNR resulted in significantly better performance in both omnidirectional and directional modes, and performance in directional mode was significantly better than omnidirectional at all SNR levels. The authors pointed out that auditory-visual performance was better than auditory-only performance at all three SNRs, despite the fact that the absolute SNRs for the auditory-visual conditions were lower than the equivalent auditory-only conditions by an average of 5dB. The authors interpreted this finding as strong support for the benefit of visual cues for speech recognition in adverse conditions.

When the authors analyzed the effects of directionality and SNR separately for auditory-only and auditory-visual conditions, they found that directional performance was significantly better than omnidirectional performance in all auditory-visual conditions. In auditory-only conditions, directionality had a significant effect only at the baseline SNR, not in the baseline +4dB, baseline +8dB or quiet conditions.

Perhaps not surprisingly, Ricketts and his colleagues found that the addition of visual cues offered their severely impaired listeners a significant advantage for understanding connected speech. When they compared auditory-only and auditory-visual scores at equivalent test conditions, participants achieved an average improvement of 22% with the availability of visual cues. This finding agrees with previous research that found a visual advantage of 24% for listeners with moderate hearing loss (Henry & Ricketts, 2003).

Also not surprisingly, performance improved with increases in signal-to-noise ratio. In the auditory-only condition, there was an average improvement of 1.6% per dB in omnidirectional mode and 2.7% per dB in directional mode. In the auditory-visual condition, there was an improvement of 3.7% per dB in omnidirectional mode and 3.1% per dB in directional mode. Furthermore, there was an additional performance increase of 8% in directional mode and 12% in omnidirectional mode when participants were tested in quiet. This was somewhat surprising, given previous research based on the articulation index (AI) suggesting that maximal performance could be expected at SNRs of approximately +15dB. The absolute SNR for the baseline +8dB condition was 14.7dB, so the further improvement in quiet supports the suggestion that hearing-impaired listeners experience continued improvement at SNRs up to +20dB (Studebaker et al., 1999; Sherbecoe & Studebaker, 2002).

The benefit of visual cues was not directly quantified by this study, because auditory-only and auditory-visual performance were not compared at the same absolute SNRs. However, the finding that visual cues improved performance even when the SNRs were approximately 5dB poorer is strong support for the benefit of visual information for speech recognition in noisy environments. This underscores the recommendation that severely hearing-impaired listeners be counseled to take advantage of visual cues whenever possible, especially in adverse listening conditions. Although visual cues cannot completely counterbalance the auditory cues lost to hearing loss and competing noise, they supply additional information that can help the listener identify or differentiate phonemes, especially in connected speech containing semantic and syntactic context. In conversational situations, visual cues include not just lip-reading but also the speaker’s gestures, expressions and body language. All of these cues can aid speech recognition, so hearing-impaired individuals as well as their family members should be trained in strategies to maximize the availability of visual information.

Ricketts and Hornsby’s study supports the potential benefit of directional microphones for individuals with severe hearing loss. Many hearing aid users with severe-to-profound loss have become accustomed to omnidirectional microphones and may be resistant to directional processing, especially automatic directionality in the primary program of their hearing instruments. One strategy for these cases is to program the primary memory as full-time omnidirectional while programming a second, manually accessed memory with a full-time directional microphone. This way, the listener can choose when and how to use the directional program and may be less likely to experience unexpected and potentially disconcerting changes in perceived loudness and sound quality.

In addition to providing evidence for the benefit of visual cues and directionality, the findings of this study can be extrapolated to support the use of FM and wireless accessories. The fact that performance in quiet was still significantly better than performance at the most favorable SNR tested (14.7dB) shows that improving SNR as much as possible provides demonstrable advantages for listeners with severe hearing loss. Even for individuals who do well with their hearing instruments overall, wireless accessories that stream audio directly to the listener’s hearing instruments may further improve understanding. These accessories improve SNR by reducing the effects of room acoustics and reverberation, as well as the effects of competing noise and of distance between the sound source and the listener. Most modern hearing instruments are compatible with wireless accessories, so hearing aid evaluations should always include discussion of their potential benefits. These devices work with a wide range of hearing aid styles, do not require an adapter or receiver boot and are much less expensive than an FM system.

Ricketts and Hornsby’s study underscores the importance of visual information and directionality for speech recognition in noisy environments and illuminates ways in which clinicians can help patients with severe-to-profound loss achieve improved communication in everyday circumstances. Modern technologies such as directional processing and wireless audio streaming accessories can be effective tools for improving SNRs in everyday situations that may otherwise challenge or overwhelm the listener with severe to profound hearing loss.


Erber, N.P. (1969). Interaction of audition and vision in the reception of oral speech stimuli. Journal of Speech and Hearing Research 12, 423-425.

Grant, K.W., Walden, B.E. & Seitz, P.F. (1998). Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Henry, P. & Ricketts, T. A. (2003).  The effect of head angle on auditory and visual input for directional and omnidirectional hearing aids. American Journal of Audiology 12(1), 41-51.

Killion, M. C., Schulien, R., Christensen, L., Fabry, D. & Revit, L. (1998). Real world performance of an ITE directional microphone. Hearing Journal 51(4), 24-38.

Killion, M.C. & Christensen, L. (1998). The case of the missing dots: AI and SNR loss. Hearing Journal 51(5), 32-47.

MacLeod, A. & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology 21, 131-141.

Madison, T.K. & Hawkins, D.B. (1983). The signal-to-noise ratio advantage of directional microphones. Hearing Instruments 34(2), 18, 49.

Pavlovic, C. (1984). Use of the articulation index for assessing residual auditory function in listeners with sensorineural hearing impairment. Journal of the Acoustical Society of America 75, 1253-1258.

Pavlovic, C., Studebaker, G. & Sherbecoe, R. (1986). An articulation index based procedure for predicting the speech recognition performance of hearing-impaired individuals. Journal of the Acoustical Society of America 80(1), 50-57.

Ricketts, T.A. (2000a). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21(3), 194-205.

Ricketts, T.A. & Henry, P. (2002). Evaluation of an adaptive directional microphone hearing aid. International Journal of Audiology 41(2), 100-112.

Ricketts, T., Henry, P. & Hornsby, B. (2005). Application of frequency importance functions to directivity for prediction of benefit in uniform fields. Ear & Hearing 26(5), 473-86.

Studebaker, G., Sherbecoe, R., McDaniel, D. & Gwaltney, C. (1999). Monosyllabic word recognition at higher-than-normal speech and noise levels. Journal of the Acoustical Society of America 105(4), 2431-2444.

Sherbecoe, R.L. & Studebaker, G.A. (2002). Audibility-index functions for the Connected Speech Test. Ear & Hearing 23(5), 385-398.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Walden, B.E., Busacco, D.A. & Montgomery, A.A. (1993). Benefit from visual cues in auditory-visual speech recognition by middle-aged and elderly persons. Journal of Speech and Hearing Research 36, 431-436.