Starkey Research & Clinical Blog

The real-world benefits of directional microphones with infants and young children

This editorial reviews the published article:

Directional Effects on Infants and Young Children in Real Life: Implications for Amplification
Ching, O’Brien, Dillon, Chalupper, Hartley, Hartley, Raicevich and Hain, 2009

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not necessarily reflect the opinions of the original authors.

The beneficial effect of directional microphone use on adult speech perception in noisy environments is well known and is based on the fact that conversational speech usually takes place with participants facing each other. Reducing the level of competing sound behind the listener, even slightly, can increase the signal-to-noise ratio (SNR), resulting in improved identification and discrimination of speech sounds. Clinical audiologists are accustomed to counseling patients to maintain face-to-face contact whenever possible to get the most benefit from the directional microphones and to take advantage of visual cues as well.
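The SNR arithmetic behind this reasoning can be made concrete with a small sketch. The `snr_db` helper and the amplitude values below are illustrative only; they are not taken from any of the studies discussed here:

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels, from RMS amplitudes."""
    return 20 * math.log10(signal_rms / noise_rms)

# Speech at the same level as the competing noise gives 0 dB SNR;
# halving the noise amplitude (as a directional microphone might do
# for sound from behind) raises the SNR by about 6 dB.
print(round(snr_db(1.0, 1.0)))   # 0
print(round(snr_db(1.0, 0.5)))   # 6
```

Even a few decibels of improvement of this kind can make the difference between identifying and missing a speech sound in noise.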

The potential advantage of directional microphone use for children is less understood, partly because children may not employ face-to-face communication as regularly as adults do. Several studies have demonstrated the importance of improved SNR for speech reception in children and it is generally accepted that even children with normal hearing require a greater SNR than adults (Crandell & Smaldino, 2004; Johnstone & Litovsky, 2006). This is particularly true for hearing-impaired children, especially those of a young age. We also know that children are able to orient toward sound sources at a very young age (Ashmead, Clifton & Perrin, 1987; Muir & Field, 1979; Muir, Clifton & Clarkson, 1989), so it follows that directional microphones could potentially improve their speech reception ability in the presence of competing sounds. However, because of concerns about reduced access to non-frontal speech and environmental sounds, audiologists are often reluctant to fit young children and infants with hearing aids equipped with directional settings for fear of detrimental effects on incidental learning.

Ching, O’Brien, Dillon, Chalupper, Hartley, Hartley, Raicevich and Hain (2009) investigated head orientation and the opportunity for young children to benefit from directional hearing aid use in everyday environments. Prior research had shown benefits of directionality in laboratory conditions (Bohnert & Brantzen, 2004; Condie, Scollie, & Checkley, 2002; Kuk, Kollofski, Brown, Melum & Rosenthal, 1999), but it was unknown how directionality would affect speech reception in more typical, naturalistic situations. The goal of the study was twofold: 1) to determine the potential benefit of directionality on reception of speech in naturalistic listening situations, and 2) to examine potentially detrimental effects of directionality on non-frontal sounds.

The authors recruited eleven children with normal hearing and sixteen children with moderate hearing loss between the ages of 11 months and 6.5 years. The children were fitted with behind-the-ear, wide dynamic range hearing aids with directional microphones. None of them had prior experience with directionality in their personal hearing aids.

Video recordings of the children were obtained in four scenarios representing everyday situations. Diary entries from parents and caregivers were collected to identify listening situations that accounted for approximately 80% of the child’s weekly routine. It was hoped that the diary entries would help predict how often the children were likely to be in situations where directionality could be beneficial.

The video recordings of the children in typical listening scenarios were used to evaluate the proportion of time that they were oriented toward primary speech sources. The four scenarios were:

* The child interacting directly with a caregiver in a play situation
* The child NOT interacting directly with adults in the same room
* The child indoors with other children and adults
* The child outdoors with other children and adults

During the recordings, the researchers logged the time during which speech was “present”. Speech was deemed “present” whenever a primary talker could be identified, whether or not they were addressing the child directly.

Video analysis revealed that in the one-to-one situation, the children oriented themselves toward the talker almost 60% of the time. In the remaining group scenarios, the children oriented toward the primary talker between 30% and 50% of the time, even when they were not being directly addressed by the talker. They were least likely to face the talker in the second scenario, in which adults were present but the child was not engaged in play with adults or other children. Interestingly, neither age nor the presence of hearing loss affected the proportion of time that the children spent facing the talker.

Examination of the caregivers’ diaries revealed that the majority of the children’s time was spent on indoor activities, particularly in group situations. Children with normal hearing were slightly more likely to participate in group activities than hearing-impaired children were. Conversely, hearing-impaired children were somewhat more likely than normal-hearing children to participate in one-to-one activities.

Overall, it was determined that directionality had a positive effect on speech reception, because:
* children oriented themselves toward the primary talker more than 50% of the time
* directionality improved SNR for speech in front of the child, especially in group situations
* diary entries showed that the children frequently participated in group activities

It was also determined that directionality is not likely to have detrimental effects on the perception of incidental speech and environmental sounds. The children still oriented themselves toward primary speech sources more than 40% of the time, even when talkers were not directly addressing them. Furthermore, the authors pointed out that the changes in SNR introduced by directionality were small: enough to have a significant effect on reception of speech from the front in the presence of background noise, but unlikely to be enough to impair perception of dominant sound sources from the rear. It follows, then, that directional microphone settings in hearing aids could benefit young pediatric hearing aid users by improving the signal-to-noise ratio, and therefore the reception of speech information, especially in group situations.

The authors cautioned that directional hearing aid programs, partly because of their inherent decreases in low-frequency gain, might not always be advisable for children, especially in quiet conditions. They recommended the use of directional settings with equalized frequency responses to compensate for the reduction in low-frequency gain, and suggested that switchable instruments would be best, allowing omnidirectional hearing in quiet conditions and directionality in the presence of noise. Because young children and infants are not capable of adjusting hearing aid settings on their own, automatically adjustable instruments were suggested, especially those that can prioritize speech from a dominant talker even from non-frontal directions. Today we have a wide variety of automatically adjustable directional instruments available at a broad range of price points. This, coupled with ongoing improvements in speech enhancement and noise reduction in hearing aid circuitry, indicates that clinicians will have even better tools to help hearing-impaired children function in noisy, everyday situations.

The authors underscored the importance of thoroughly counseling caregivers on the effects of directionality in various listening environments. For instance, caregivers should pay attention to the child’s head orientation and positioning and should initiate face-to-face communication at close proximity whenever possible, particularly in noisy situations. Clinical audiologists routinely counsel patients on proper positioning and the importance of face-to-face communication to reduce the effects of background noise on speech perception. Because young, hearing-impaired children rely on better signal-to-noise ratios to receive and process speech information in their everyday activities, and because they may not always orient themselves toward primary speech sources, it is particularly important for their caregivers to understand how they can help maximize the benefit of the child’s directional microphone hearing aids.

Ashmead, D.H., Clifton, R.K. & Perrin, E.E. (1987). Precision of auditory localization in human infants. Developmental Psychology, 23, 641-647.

Bohnert, A., & Brantzen, P. (2004). Experiences when fitting children with a digital directional hearing aid. Hearing Review, 11, 50-55.

Ching, T.Y.C., O’Brien, A., Dillon, H., Chalupper, J., Hartley, L., Hartley, D., Raicevich, & Hain, J. (2009). Directional effects on infants and young children in real life: implications for amplification. Journal of Speech Language and Hearing Research, 52, 1241-1254.

Condie, R.K., Scollie, S.D., & Checkley, P. (2002). Children’s performance: Analog versus digital adaptive dual-microphone instruments. Hearing Review, 9, 40-43.

Crandell, C., & Smaldino, J. J. (2004). Classroom acoustics. In R.D. Kent (Ed.), The MIT Encyclopedia of communication disorders (pp 442-444). Cambridge, MA: The MIT Press.

Johnstone, P.M. & Litovsky, R.Y. (2006). Effect of masker type and age on speech intelligibility and spatial release from masking in children and adults. The Journal of the Acoustical Society of America, 120, 2177-2189.

Kuk, F., Kollofski, C., Brown, S., Melum, A., & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535-548.

Muir, D., & Field, J. (1979). Newborn infants orient to sounds. Child Development, 50, 431-436.

Muir, D., Clifton, R.K., & Clarkson, M.G. (1989). The development of a human auditory localization response: A U-shaped function. Canadian Journal of Psychology, 43, 199-216.



The effect of digital noise reduction on listening effort: an article review

This article marks the first in a monthly series for the Starkey Research & Clinical Blog.

Each month, scholarly journals publish articles on a wide array of topics. Some of these valuable articles and their useful conclusions never reach professionals in the clinical arena. The aim of these entries is to discuss research findings and their implications for hearing professionals in daily clinical practice. Some topics may have general clinical relevance, while others may target specific aspects of hearing aids and their application.

This first discussion revolves around an article by authors Sarampalis, Kalluri, Edwards, and Hafter entitled “Objective measures of listening effort: Effects of background noise and noise reduction”. In this 2009 study, the authors pursue the sometimes elusive benefits of digital noise reduction. A review of past literature suggests that digital noise reduction, as implemented in hearing aids, benefits patients through improved sound quality, ease of listening and a possible perceived improvement in speech understanding. Significant improvements in speech understanding are, however, not a routinely observed benefit of digital noise reduction and some studies have shown significant decreases in speech understanding with active digital noise reduction.

In a 1992 article, Hafter and Schlauch suggested that noise reduction may lighten a patient’s cognitive load, essentially freeing resources for other tasks. To better understand the proposed effect, imagine driving a car in an unfamiliar area. It’s common for drivers to turn their stereo down, or off, when driving in a demanding situation. This is beneficial not because music affects driving ability, but because the additional auditory input is distracting, effectively increasing the driver’s cognitive load. Removing the distraction of the stereo frees cognitive resources and improves the ability to focus on, and pay attention to, the complex task of driving.

In order to better understand how digital noise reduction may affect attention and cognitive load, two experiments were completed. In the first experiment, research participants were asked to repeat the last word of sentences presented in a background of noise. After eight sentences the listener attempted to repeat as many of the target words as they could. The sentence material contained both high-context and no-context conditions, for example:

High context: A chimpanzee is an ape

No context: She might have discussed the ape

In the second experiment listeners were asked to judge if a random number between one and eight was even or odd, while at the same time listening to and repeating sentences presented in a background of noise. Both experiments incorporated a dual-task paradigm: the first asked participants to repeat select words presented in noise, while also remembering these words for later recall. The second required participants to repeat an entire sentence, presented in noise, while also completing a complex visual task.
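As an illustration of how reaction-time data from such a dual-task paradigm might be summarized, here is a minimal sketch. The condition labels and every number below are invented for the example; they are not the study’s data:

```python
from statistics import mean

# Hypothetical secondary-task reaction times (seconds), grouped by
# listening condition. "NR" stands for digital noise reduction.
trials = {
    "-6 dB SNR, NR off": [0.61, 0.58, 0.66, 0.63],
    "-6 dB SNR, NR on":  [0.52, 0.55, 0.50, 0.54],
}

# Faster mean reaction times with NR on would suggest that noise
# reduction freed cognitive resources for the secondary task.
for condition, rts in trials.items():
    print(f"{condition}: mean RT = {mean(rts):.3f} s")
```

In this invented dataset, the shorter reaction times in the “NR on” condition mirror the direction of the paper’s finding: reduced listening effort leaves more capacity for the concurrent task.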

Highlights from experiment one show:

  • performance in all conditions decreased as the signal-to-noise ratio became more difficult;
  • overall performance in the no-context conditions was lower than in the high-context conditions;
  • a comparison between performance with and without digital noise reduction showed a significant improvement in recall ability with digital noise reduction

Highlights from experiment two show:

  • performance in all conditions decreased as the signal-to-noise ratio became more difficult;
  • reaction times increased with decreased signal-to-noise ratio;
  • at -6 dB SNR, reaction times were significantly improved with digital noise reduction

The findings of this study show that the cognitive demands of non-auditory tasks, such as visual and memory tasks, inhibit a person’s ability to understand speech in noise. In other words, secondary tasks make speech understanding more difficult. Additionally, digital noise reduction algorithms can reduce cognitive effort under adverse listening conditions. The authors discuss the value of using cognitive measures in hearing aid research and speculate that directional microphones may provide a cognitive benefit as well.

The clinical implications of this study suggest that patients may receive benefits from wearing hearing aids that go beyond improved speech audibility. Modern signal processing may provide benefits that are only now being understood. For instance, a patient may report that their new hearing aids have made listening easier, or that they seem to suppress noise more than the old ones, yet routine evaluation of speech understanding may not show significant differences between the two sets of hearing aids.

Hearing aid success and benefit have traditionally been defined by the results of speech testing or questionnaires. If advanced technology can ease the task of listening, patients may be receiving benefits from their hearing aids that we are not currently prepared to evaluate in the clinic. Hopefully, work in this area will continue, increasing our understanding of the role that cognition plays in the success of the hearing aid wearer.

Bentler, R., Wu, Y., Kettle, J., & Hurtig, R. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology, 47(8), 447-460.

Hafter, E. R., & Schlauch, R. S. (1992). Cognitive factors and selection of auditory listening bands. In A. Dancer, D. Henderson, R. J. Salvi, & R. P. Hammernik ( Eds.), Noise-induced hearing loss (pp. 303–310). Philadelphia: B.C. Decker.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language and Hearing Research, 52, 1230-1240.