Starkey Research & Clinical Blog

On the importance of data logging: hearing aid wearers over-report daily use

How reliable are patients’ estimates of their daily hearing aid use?

Solheim, J. & Hickson, L. (2017). Hearing aid use in the elderly as measured by data-logging and self-report. International Journal of Audiology, 56, 472-479.

When the 3M MemoryMate hearing aid was introduced in 1987, it was the first hearing aid to measure and record the number of hours it was worn in its various memories (data-logging). Although today's instruments capture a far more extensive array of information, many clinicians especially value the record of daily wear time, as it provides hard data that can help in counseling and in adjustment to amplification (Bertoli et al., 2009; Gaffney, 2008; Laplante-Levesque, Nielsen, Jensen, & Naylor, 2014; McMillan, Durai, & Searchfield, 2017; Stark & Hickson, 2004).

There are two ways to measure how many hours a hearing aid is used each day: subjectively, by asking the patient to estimate the hours the hearing aids have been worn, and objectively, by reading the actual hours of use recorded by data-logging within the hearing aids. Studies that have compared self-reported use to actual use determined by data-logging have found a systematic discrepancy: the self-reported hours routinely exceed the objective measurements (Gaffney, 2008; Humes, Halling, & Coughlin, 1996; Laplante-Levesque, Nielsen, Jensen, & Naylor, 2014; Maki-Torkko, Sorri, & Laukli, 2001; Taubman, Palmer, Durrant, & Pratt, 1999). In other words, patients consistently report wearing their hearing aids for longer than the data-logging reveals.

The goals of this study were two-fold. The first was to collect hearing aid use data by objective and subjective means from a group of patients over 60 years of age during the first six months after the hearing aid fitting. The second was to evaluate whether knowledge of a six-month follow-up appointment affected hearing aid use. To accomplish this second goal, the patients were randomly divided into two study groups: an intervention group, in which patients were given an appointment for a six-month follow-up at the time of the hearing aid fitting, and a control group, in which no follow-up appointment was discussed. All patients received the same sequence of fitting procedures, including clearance by a physician, a comprehensive audiological workup, the hearing aid fitting, and a one-month trial before final acceptance into the study. After six months, both groups were contacted and informed that their hearing aid usage would be documented at the upcoming appointment, but the method for collecting this information was not revealed. Subjective data were collected by asking patients a single question: what would you estimate your hearing aid use, in hours per day, to have been over the last six months?

The researchers included 93 patients in the intervention group and 88 patients in the control group. The mean age for the entire cohort was 79.2 years, and slightly more than half were women. The average hearing threshold for the better ear was 49.4 dB; 86.2% of participants were fitted bilaterally, and 55.2% were experienced hearing aid users.

The results were reported in two ways. First, for all subjects in the study, including those who did not wear the hearing aids at all, average use as recorded by data-logging was 6.12 hours per day, while self-reported daily use was significantly higher at 8.39 hours per day, a difference of roughly two hours. Second, for subjects who wore the aids at least 30 minutes each day, average use was 7.24 hours per day by data-logging, while self-reported daily use was again significantly higher at 9.58 hours per day, also a difference of roughly two hours. This study, in agreement with those cited above, clearly indicates a substantial, systematic discrepancy between patients' estimates and the information provided by data-logging; across these studies, patients tend to overestimate hearing aid use by approximately one to four hours per day.
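To make the comparison concrete, here is a minimal Python sketch, using entirely hypothetical numbers rather than the study's data, that computes average data-logged and self-reported use for all patients and then for only those who wore their aids at least 30 minutes per day:

import numpy as np

# Hypothetical daily-use values (hours/day); not the study's raw data.
logged_hours = np.array([0.0, 2.1, 5.4, 7.8, 9.0, 6.3, 0.2, 8.5])       # data-logging
reported_hours = np.array([1.0, 4.0, 8.0, 10.0, 12.0, 8.0, 2.0, 10.0])  # patient estimates

print("All patients: logged %.2f h/day, reported %.2f h/day"
      % (logged_hours.mean(), reported_hours.mean()))

wearers = logged_hours >= 0.5  # wore the aids at least 30 minutes per day
print("Wearers only: logged %.2f h/day, reported %.2f h/day"
      % (logged_hours[wearers].mean(), reported_hours[wearers].mean()))

print("Average over-report: %.2f h/day" % (reported_hours - logged_hours).mean())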

As expected, the authors identified several factors that predicted increased hearing aid use: more severe hearing loss, prior hearing aid experience, and increasing age. Gender, the number of hearing aids worn, and the style of hearing aid were unrelated to use. Regression analyses indicated that degree of hearing loss was the strongest predictor of hearing aid use, whether measured by self-report or by data-logging.

Finally, advance knowledge of the six-month follow-up appointment had no effect: the intervention and control groups had similar follow-up attendance rates and similar data-logged and self-reported hearing aid use. The follow-up attendance rate was approximately 75%; on this basis, the authors conclude that patients perceive value in attending a follow-up visit.

References

Bertoli, S., Staehlin, K., Zemp, E., Schindler, C., Bodmer, D., & Probst, R. (2009). Survey on hearing aid use and satisfaction in Switzerland and their determinants. International Journal of Audiology, 48, 183-195.

Gaffney, P. (2008). Reported hearing aid use versus data-logging in a VA population. Hearing Review, 6.

Humes, L.E., Halling, D., & Coughlin, M. (1996). Reliability and stability of various hearing-aid outcome measures in a group of elderly hearing-aid wearers. Journal of Speech and Hearing Research, 39, 923-935.

Laplante-Levesque, A., Nielsen, C., Jensen, L.D., & Naylor, G. (2014). Patterns of hearing aid usage predict hearing aid use amount (data-logged and self-reported) and over-report. Journal of the American Academy of Audiology, 25, 187-198.

Maki-Torkko, E.M., Sorri, M.J., & Laukli, E. (2001). Objective assessment of hearing aid use. Scandinavian Audiology, 30, 81-82.

McMillan, A., Durai, M., & Searchfield, G.D. (2017). A survey and clinical evaluation of hearing aid data-logging: A valued but underutilized hearing aid fitting tool. Speech, Language and Hearing, Published Online.

Stark, P. & Hickson, L. (2004). Outcomes of hearing aid fitting for older people with hearing impairment and their significant others. International Journal of Audiology, 43, 309-398.

Taubman, L., Palmer, C.V., Durrant, J.D., & Pratt, S. (1999). Accuracy of hearing aid use time as reported by experienced hearing aid wearers.  Ear and Hearing, 20, 299-305.

Hearing Aid Use Decreases Perceived Loneliness

Weinstein, B., Sirow, L. & Moser, S. (2016).  Relating hearing aid use to social and emotional loneliness in older adults. American Journal of Audiology 25, 54-61.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Social isolation and loneliness have been linked to increased risk of cognitive decline, cardiovascular disease, increased inflammatory response to stress, depression and other physical and mental health problems (Cacioppo et al., 2000; Hawkley & Cacioppo, 2010; Steptoe et al., 2004).  Estimates suggest that between 10% and 40% of community-dwelling older adults experience social isolation and loneliness, with rural areas having an even higher prevalence of reported loneliness (Nicholson, 2012; Dahlberg & McKee, 2014).

Weinstein and Ventry (1982) were among the first to study the effect of hearing loss on subjective social isolation, finding that self-reported hearing loss was highly correlated with feelings of loneliness and inferiority, reduced interest in leisure activities, and withdrawal from others. In a longitudinal study on aging, Pronk and her colleagues found that self-reported hearing loss was associated with increased social and emotional loneliness, and they observed that hearing aid users had better scores than non-users (Pronk et al., 2013). These reports raise a number of questions. For instance, if hearing aid use reduces social isolation and loneliness, will associated health problems, such as cognitive decline, be reduced as well? Such an outcome would have widespread implications for the health and well-being of the older population at large.

The goals of the current study by Weinstein and her colleagues were to determine whether first-time hearing aid use reduces social and emotional loneliness. They also examined loneliness in individuals with mild hearing loss and in those with moderate to severe hearing loss, before and after intervention with hearing aids, to determine whether the effects were dose-related.

Forty adults ranging in age from 62 to 92 years participated in four experimental sessions. At the first session, they completed audiological and speech-in-noise testing, followed by hearing aid selection. Pure-tone testing was conducted with standard audiometric procedures, the QuickSIN test (Killion et al., 2004) was used to evaluate speech recognition in noise, and otoacoustic emission testing was also completed. At the second session, subjects were fitted with binaural hearing aids and trained in their use, real-ear verification measures were conducted, and working memory was evaluated with the Reading Span test (Daneman & Carpenter, 1980). Also at this appointment, the DG Loneliness Scale (DeJong Gierveld & Kamphuis, 1985) was administered; it measures two specific subscales of loneliness: emotional loneliness and social loneliness. Subjects returned for a third session one week after the hearing aid fitting and a fourth session approximately 4-6 weeks after the fitting.

The authors observed a significant decrease in overall loneliness and perceived emotional loneliness after 4-6 weeks of hearing aid use; a reduction in social loneliness that did not achieve statistical significance was also seen.  A sub-group of subjects with more severe hearing loss showed significant decreases in overall loneliness as well as social and emotional loneliness after hearing aid use. This group demonstrated poorer scores pre- and post-fitting, compared to the mild hearing loss group.  There was no significant predictive relationship between age and the measures of social and emotional loneliness and no dose-related effect of hearing loss.  These were not surprising outcomes: health status and functional limitations are more strongly related to social isolation and loneliness than age, and prior studies showed correlations between social isolation/loneliness and perceived hearing loss, as opposed to audiometric thresholds (Hornsby & Kipp, 2016; Pronk et al., 2013).

Subjects were also classified into two groups as “lonely” or “not lonely”, relative to normative data. Prior to hearing aid fitting, 55% were classified as “not lonely” and 45% were classified as “lonely”. After hearing aid use, there was a significant decline in loneliness, with 72.5% of the subjects classified as “not lonely” and 27.5% classified as “lonely”.

The outcomes of this study complement and support our observations as clinical audiologists. We frequently see the adverse effects of hearing loss on the quality of relationships and social interaction. Hearing loss, and the resulting difficulty communicating in groups, causes strained conversation and frustration for all participants and increases mental fatigue in the hearing-impaired individual (as reported by Hornsby, 2013, and Pronk et al., 2013). This frustration and fatigue often result in avoidance of social interaction. Therefore, even individuals with a large network of friends and family can experience isolation and loneliness if they struggle to participate in groups or fear that they annoy others with requests for repetition and misinterpretations of conversation. Most audiologists have heard patients explain that they avoid plays, parties, or particular restaurants because they know they will struggle to understand conversation. Older hearing-impaired adults are even more likely to avoid social engagement, because multiple sensory impairments or a decline in cognitive resources may make compensatory strategies, such as the use of visual cues and context, more challenging, thereby increasing frustration and fatigue.

Most clinicians probably discuss social activities and challenges with their new patients in the process of obtaining a detailed initial history. Weinstein and her colleagues suggest that audiologists should also consider incorporating a discussion of social network size and a measure of social and emotional loneliness into their evaluation procedures. They recommend the 6-item DG Loneliness Scale as a brief yet reliable and valid tool for measuring social and emotional loneliness (DeJong Gierveld & Van Tilburg, 2006). It is important to consider both aspects of social activity, as some people with small social networks consider themselves lonely whereas others do not.

Social and emotional loneliness are linked to higher risk of an array of physical and mental health problems, including cognitive decline. Hearing loss is known to increase the risk of social isolation and loneliness, but the results reported by Weinstein and her colleagues suggest that hearing aid use may mitigate this effect, by facilitating more consistent and satisfying social engagement. More study of the potential social and emotional benefits of hearing aid use is needed, especially with regard to how it may reduce the risk of cognitive decline in older adults, by way of a reduction in social and emotional loneliness.

 

References

Cacioppo, J., Ernst, J., Burleson, M., McClintock, M., Malarkey, W., Hawkley, L. & Berntson, G. (2000). Lonely traits and concomitant physiological processes: the MacArthur social neuroscience studies. International Journal of Psychophysiology 35, 143-154.

Dahlberg, L. & McKee, K. (2014). Correlates of social and emotional loneliness in older people: Evidence from an English community study. Aging and Mental Health 18, 504-514.

Daneman, M. & Carpenter, P. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior 19, 450-466.

DeJong Gierveld, J. & Kamphuis, F. (1985). The development of a Rasch-type loneliness scale. Applied Psychological Measurement 9, 289-299.

DeJong Gierveld, J. & Van Tilburg, T. (2006). A 6-item scale for overall, emotional and social loneliness. Research on Aging 28, 582-598.

Hawkley, L. & Cacioppo, J. (2010). Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Annals of Behavioral Medicine 40, 218-227.

Hawthorne, G. (2008). Perceived social isolation in a community sample: Its prevalence and correlates with aspects of peoples’ lives. Social Psychiatry and Psychiatric Epidemiology 43, 140-150.

Hornsby, B. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing 34 (5), 523-534.

Hornsby, B. & Kipp, A. (2016). Subjective ratings of fatigue and vigor in adults with hearing loss are driven by perceived hearing difficulties not degree of hearing loss. Ear and Hearing 37 (1), 1-10.

Killion, M., Niquette, P., Gudmundsen, G., Revit, L. & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America 116, 2395-2405.

Lin, F.  (2011). Hearing loss and cognition among older adults in the United States. The Journals of Gerontology A: Biological Sciences and Medical Sciences 66 (10), 1131-1136.

Lin, F., Yaffe, K., & Xia, J. (2013). Hearing loss and cognitive decline in older adults. Journal of the American Medical Association Internal Medicine 173 (4), 293-299.

Nicholson, N. (2012). A review of social isolation. The Journal of Primary Prevention 33, 137-152.

Perlman, D. (1987). Further reflections on the present state of loneliness research. Journal of Social Behavior and Personality 2, 17-26.

Pronk, M., Deeg, D. & Kramer, S. (2013). Hearing status in older persons: A significant determinant of depression and loneliness? Results from the Longitudinal Aging Study Amsterdam. American Journal of Audiology 22, 316-320.

Steptoe, A., Owen, N., Kunz-Ebrecht, S. & Brydon, L. (2004). Loneliness and neuroendocrine, cardiovascular and inflammatory stress responses in middle-aged men and women. Psychoneuroendocrinology 29, 593-611.

Weinstein, B. & Ventry, I. (1982). Hearing impairment and social isolation in the elderly. Journal of Speech and Hearing Research 25, 593-599.

Weinstein, B., Sirow, L. & Moser, S. (2016).  Relating hearing aid use to social and emotional loneliness in older adults. American Journal of Audiology 25, 54-61.

A Digital Finger on a Warm Pulse: Wearables and the future of healthcare

 

Taken together, blood pressure, glucose and oxygenation levels, sympathetic neural activity (stress levels), skin temperature, level of exertion, and geo-location provide a very informative, in-the-moment picture of physiological status and activity. All of these can be provided today by clinical-grade smart monitors used in medical research projects around the world. Subtle changes in their patterns over time can provide very early warnings of many disease and dysfunctional states (see this article in the journal Artificial Intelligence in Medicine).

It is well established that clinical outcomes are highly correlated with timely diagnosis and efficient differential diagnosis. In the not-too-distant future, your guardian angel will be a medic-AI that uses machine learning to individualize your precise clinical norms, matched against an ever-evolving library of norms harvested from the Cloud. You never get to your first cardiac event, because you take the advice of your medic-AI and make subtle (and therefore very easy) modifications to your diet and activity patterns throughout your life. If things do go wrong, the paramedics arrive well before your symptoms do! The improvements in quality of life and the savings in medical costs are (almost) incalculable. This is such a hot topic in research at the moment that Nature recently ran a special news feature on wearable electronics.

There are, however, more direct ways in which your medic-AI can help manage your physiological status. Many chronic conditions today are managed with embedded drug delivery systems, but these need to be coupled with periodic hospital visits for blood tests and status examinations. Wirelessly connecting your embedded health management system (which includes an array of advanced sensors) to your medic-AI avoids all of that. In fact, the health management system can be designed to keep a wide range of physiological parameters within their normal ranges despite the occasional healthy-living lapse of its host.

For me as a neuroscientist, the most exciting developments in sensor technology are in the ambulatory measurement of brain activity. Recent work in a number of research laboratories has used different ways to measure the brain activity of people listening to multiple talkers in conversation, not unlike the classic cocktail party scenario. What they have found is nothing short of amazing. Using relatively simple EEG recordings from scalp electrodes, the audio streams of the concurrent talkers, and rather sophisticated machine learning and decoding, these systems are able to detect which talker the listener is attending to. Some research indicates that not only the attended talker but also that talker's spatial location can be decoded from the EEG signal, and that this process is quite resistant to acoustic clutter in the environment.
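As a rough illustration of how such decoding can work, the sketch below implements one widely used strategy, stimulus reconstruction: a linear (ridge-regression) decoder maps time-lagged EEG to an estimate of the attended speech envelope, and the talker whose envelope correlates best with that estimate is taken to be the target. The lag window, regularization, and data shapes here are simplified placeholders, not the pipeline of any particular laboratory.

import numpy as np

def lagged(eeg, n_lags):
    # Stack time-lagged copies of the EEG (samples x channels) into a design matrix.
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, attended_envelope, n_lags=16, ridge=100.0):
    # Ridge regression from lagged EEG to the attended talker's speech envelope.
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_envelope)

def decode_attention(eeg, envelope_a, envelope_b, weights, n_lags=16):
    # Reconstruct an envelope from the EEG and pick the talker it matches best.
    reconstruction = lagged(eeg, n_lags) @ weights
    r_a = np.corrcoef(reconstruction, envelope_a)[0, 1]
    r_b = np.corrcoef(reconstruction, envelope_b)[0, 1]
    return "talker A" if r_a > r_b else "talker B"

In the published work, decoders of this general kind are trained on labeled listening data and then applied to short test windows, with accuracy generally improving as the decision window gets longer.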

This is a profound finding, because it shows that we can follow the intention of the listener, in terms of where their attention is directed and how that changes over time. This information can be used to steer the hearing aid's signal processing toward the spatial location the listener is attending to and to enhance the information the listener wants to hear, effectively defining for us what is signal and what is noise when the environment is full of talkers, only one of whom is of interest at any particular instant.

Other very recent work has demonstrated just how few EEG electrodes are needed to obtain robust signals for decoding once researchers know what to look for. Furthermore, the recording systems themselves are now sufficiently miniaturized that these experiments can be performed outside the laboratory while listeners are engaged in real-world listening activities. One group of researchers at Oxford University actually has their listeners cycling around the campus while doing the experiments!

These developments demonstrate that the necessary bio-sensors are, in principle, sufficiently mature to support cognitive control of signal processing for targeted hearing enhancement. This scenario also provides a wonderful example of how the hearing instrument can share its processing load according to the time constraints of the processing. Decoding the EEG signals will require significant computation, but that computation is not especially time-critical: a few hundred milliseconds is neither here nor there, a syllable or two in the conversation. The obvious solution is for the Cloud to take on that processing load and then send the appropriate control codes back to the hearing aid, either directly or via its paired smartphone. Because the smartphone is listening to the same auditory scene as the hearing aid, it can also serve as another access point for sound data and provide additional, timelier processing capacity for other, more time-critical elements.

But no one is going to walk around wearing an EEG cap with wires and electrodes connected to their hearing aid. A great deal of sophisticated industrial design goes into a hearing aid, but integrating such a set of peripherals so that they are acceptable to wear outside the laboratory could well defeat the most talented designers. So how do we take the necessary technology and incorporate it into a socially friendly and acceptable design? We start by examining developments in the worldwide trend toward wearables and by considering some mid-term technologies that could well enter the market as artistic accessories and symbols of status.

 


The Power of the Cloud

 

In “The Fabric of Tomorrow,” I laid out a rather high-level road map for the ensuing discussion. Now it is time to start digging into the details and, more importantly, to understand how these developments can be leveraged effectively by what we do at Starkey Research.

Let’s start with the Cloud! First, the inputs: ubiquitous computing and seamless interconnectivity are like the peripheral nervous system of the Cloud. Through them, the Cloud receives all of its sensory data, the “afferent” information about the world: data covering many more realms than the human senses do, at a precision and rate that eclipse the sum of all information gathered in previous human history.

Second, the outputs: this peripheral nervous system also carries the “efferent” signals from the Cloud to the machines and displays that will effect change in the physical, intellectual, and emotional worlds we inhabit. We will come back to the peripheral nervous system and its sensors and effectors later; for the moment, let’s focus on the Cloud.

The history of people’s expectations and predictions about technology is replete with failures like these:

“I think there is a world market for maybe five computers.” – Thomas Watson, chairman of IBM, 1943

“There is no reason anyone would want a computer in their home.” – Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977.

“640K ought to be enough for anybody.” – Attributed to Bill Gates, 1981.

The future is indeed very hard to foresee. On the other hand, for what we do in Starkey Research, we need to temper our enthusiasm and optimism to properly position our work to deliver, in 5 or 10 years’ time, into the real world and not the imagined one. In contrast to the unbridled excitement of Ray Kurzweil’s visions of the future, in Starkey Research we have to build and deliver real things that solve real problems!

So, with those cautions in mind, what can we say about the Cloud? Electronics Magazine solicited an article from Gordon Moore in 1965 in which he observed, and predicted, that the number of components on an integrated circuit would continue to double each year for at least the next 10 years (he later revised the doubling period to two years). Dubbed “Moore’s law” by Carver Mead, this came to represent not just a prediction about the capacity of chip foundries and lithographers to miniaturise circuits but a general rubric for improvements in computing power (i.e., Moore’s Law V2.0 and V3.0).

The Cloud, while still built on the chips described by Moore’s law, presents as a virtually unlimited source of practical computing power. Single-entity computational behemoths will likely live on in the high-security compounds of the world’s defense and research agencies, but for the rest of us, server farms provided by Amazon (AWS), Google (GCE), Microsoft (Azure), and the like offer a virtually unlimited source of processing power. No longer are we tied to the capacity of the platform we are using: as long as that platform can connect to the Cloud, the device can share its processing needs with this highly scalable service.

But this comes at a price, and that price is time. Although networks are fast, communications incur delays from the switching and routing of message packets, the request itself is queued, and the processing takes a finite interval before the results are sent back along the network to the requesting device. At the moment, with a fast network and a modest processing request, the total time amounts to roughly the time it takes to blink (~350 ms). For hearing technology this is a very important limitation, because the ear is exquisitely sensitive to changes over time: delays between sound reaching the ear and the processed sound being delivered can detrimentally influence not only how the sound is interpreted but also a person’s ability to successfully maintain a conversation. This means we need to find ways to process locally those elements that are time-sensitive and to off-load those processes for which a hundred milliseconds or so do not matter.
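A back-of-the-envelope sketch of that local-versus-Cloud split is shown below. The latency figures are illustrative assumptions, not measurements; the point is only that any processing step whose deadline is shorter than the network round trip must stay on the device.

NETWORK_ROUND_TRIP_MS = 250   # assumed: switching, routing, queuing and transport
CLOUD_COMPUTE_MS = 100        # assumed: remote processing time for a modest request

def where_to_run(step, deadline_ms, local_compute_ms):
    cloud_total = NETWORK_ROUND_TRIP_MS + CLOUD_COMPUTE_MS
    if deadline_ms >= cloud_total:
        return "%s: off-load to the Cloud (~%d ms fits the %d ms budget)" % (step, cloud_total, deadline_ms)
    if deadline_ms >= local_compute_ms:
        return "%s: keep on the device (~%d ms fits the %d ms budget)" % (step, local_compute_ms, deadline_ms)
    return "%s: cannot meet a %d ms deadline as specified" % (step, deadline_ms)

print(where_to_run("audio path of the hearing aid", deadline_ms=10, local_compute_ms=5))
print(where_to_run("EEG-based control update", deadline_ms=500, local_compute_ms=800))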

Of course, the Cloud is more than just processing power; it also represents information, or more correctly, data. Estimating, let alone comprehending, the amount of data currently transmitted across this peripheral nervous system and potentially stored in the Cloud is no mean feat. It requires numbers that are powers of 1000 (terabyte, 1000⁴ bytes; petabyte, 1000⁵; exabyte, 1000⁶; zettabyte, 1000⁷; and so on). An estimate of traffic can be derived from Cisco’s forecast figures, published in 2013 for 2012-2017, which indicated that annual global IP traffic would pass the zettabyte threshold by the end of 2016 and that by 2017 global mobile data traffic would reach 134 exabytes annually, growing 13-fold from 2012 to 2017. As for storage, estimates place Google’s current storage at between 10 and 15 exabytes, and Google is but one of the players here; it would be very difficult to determine, for instance, the storage capacity of the NSA and other governmental agencies worldwide.
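For readers who find these prefixes hard to keep straight, the short snippet below simply spells out the decimal powers involved and the compound annual growth rate implied by a 13-fold rise over five years (the figures come from the forecast quoted above, not from an independent check):

units = {"terabyte": 4, "petabyte": 5, "exabyte": 6, "zettabyte": 7}
for name, power in units.items():
    print("1 %s = 1000^%d bytes = 1e%d bytes" % (name, power, 3 * power))

growth = 13 ** (1 / 5) - 1   # 13-fold over the five years 2012-2017
print("Implied compound annual growth: about %.0f%% per year" % (100 * growth))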

Of course these numbers are mind-boggling, and there is a point at which the actual numbers really don’t add anything more to the conversation. This is just Big Data! What they imply, however, is that a whole new range of technologies and tools needs to be developed to manipulate these data and derive information from them. Big Data and informatics in general have huge implications for how we conceive of managing hearing impairment and for how we deliver technologies to support listening in adverse environments.

 

 

 

Another piece in the puzzle of hearing aid use and cognitive decline

Amieva, H., Ouvrard, C., Giulioli, C., Meillon, C., Rullier, L. & Dartigues, J.F. (2015). Self-Reported Hearing Loss, Hearing Aids and Cognitive Decline in Elderly Adults: A 25-Year Study. Journal of the American Geriatrics Society 63(10), 2099-2104.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Among individuals aged 65 or older, 30% have some hearing loss; among those aged 85 or older, the proportion is estimated at 70-90% (Chien & Lin, 2012; Weinstein, 2000). Many individuals with hearing loss go without hearing aids, and if a causal link exists between untreated hearing loss and increased risk of cognitive decline or dementia, the implications are of meaningful concern for a large population of older adults. These factors have motivated a swell of interest in the relationships among declining hearing ability, cognition, and memory in our aging population.

Though the ways in which hearing loss is related to cognitive and memory deficits are not fully understood, recent evidence suggests that hearing loss may have a meaningful relationship to increased risk of cognitive decline (Deal et al., 2015; Lin et al., 2011; Ohta et al., 1981; Granick et al., 1976; Lindenberger & Baltes, 1994). Some reports also suggest that treatment of hearing loss with hearing aids may slow the progression of cognitive decline, though more study is needed to support this proposition (Valentijn et al., 2005; Lin et al., 2013; Deal et al., 2014). It is known, however, that hearing loss increases social isolation in the elderly (McCoy et al., 2005; Tun et al., 2009; Weinstein & Ventry, 1982), and social isolation is in turn linked to increased cognitive decline. Whether hearing loss has a direct or circuitous connection to cognitive decline, and whether treating hearing loss can slow the rate of cognitive decline, are still open questions. The purpose of the study reviewed here was to examine self-reports of hearing loss and to compare the rates of cognitive decline, or cognitive trajectories, of normal-hearing and hearing-impaired subjects, and of hearing-impaired subjects who do and do not wear hearing aids.

Amieva and colleagues completed an analysis of 3,670 subjects, age 65 or older who were participating in a French longitudinal study of aging and the brain. The study began 25 years ago with an initial neuropsychological evaluation, indices of dependency, depression and social interactions, as well as a brief questionnaire about hearing loss. Subsequent visits took place at 2-3 year intervals after the initial visit and again included tests of cognitive performance and complaints, functional ability and symptoms of depression, as well as questions about social interactions and pharmaceutical use. The Mini-Mental State Examination (MMSE; Folstein et al., 1975) was used as a measure of global cognitive performance. To gauge self-perceived hearing loss, subjects were asked “Do you have hearing trouble?” and were instructed to choose one of 3 responses:

1.  “I do not have hearing trouble.”

2.  “I have trouble following conversation with two or more people talking at the same time or in a noisy background.”

3.  “I have major hearing loss.”

In addition to the inquiry about perceived hearing loss, participants were asked if they had a hearing aid.

Participants were divided into three groups based on perceived hearing loss: 2,394 (65%) reported no hearing trouble, 1,139 (31%) reported difficulty in groups or noise, and only 137 (4%) reported major hearing loss. To examine the effect of hearing loss on cognitive trajectories, subjects were then collapsed into two groups: those without perceived hearing loss and those who reported either moderate or major hearing loss. Of the 1,276 subjects who reported hearing loss, 150 used hearing aids; of these, 89 had self-reported moderate loss and 61 had self-reported major loss.

Data analysis was comprised of three statistical models. The first model examined the relationship between hearing loss and cognitive decline. After controlling for age, gender and education, the investigators found that hearing loss was significantly related to lower scores on the MMSE and greater decline in cognitive performance over 25 years. The second statistical model examined the relationships among hearing loss, hearing aid use and cognition. At the baseline appointment, both hearing-impaired groups (moderate and major hearing loss) had lower scores on the MMSE than did the subjects with no reported hearing loss. Over the 25 years following the initial visit, there was a significant difference in the rate of cognitive decline between the group of hearing impaired individuals who did not wear hearing aids and the subjects with no reported hearing loss. In contrast, the individuals who did wear hearing aids showed no difference in cognitive trajectory from normal-hearing subjects.  A third statistical model examined hearing aids, hearing loss and cognition, while controlling for several other variables: comorbidities, dependency, dementia and psychotropic drug use. After these factors were controlled, there was no longer a significant difference between the cognitive trajectories of the sub-groups of hearing impaired subjects.
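The logic behind that third model, a group effect shrinking once covariates are added, can be illustrated with a toy simulation. The sketch below is purely hypothetical and cross-sectional (the study itself used longitudinal models of MMSE trajectories); it only shows, with simulated data, how adjusting for a variable that sits on the pathway between hearing loss and cognition can attenuate the apparent group difference.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
hearing_loss = rng.integers(0, 2, n)                    # 0 = no reported loss, 1 = reported loss
depression = 0.8 * hearing_loss + rng.normal(0, 1, n)   # partly driven by hearing loss (assumed)
mmse = 27 - 1.0 * depression + rng.normal(0, 2, n)      # cognition depends on depression (assumed)

df = pd.DataFrame({"mmse": mmse, "hearing_loss": hearing_loss, "depression": depression})

unadjusted = smf.ols("mmse ~ hearing_loss", data=df).fit()
adjusted = smf.ols("mmse ~ hearing_loss + depression", data=df).fit()

print(unadjusted.params["hearing_loss"])   # sizable negative coefficient
print(adjusted.params["hearing_loss"])     # shrinks toward zero after adjustment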

The current study is in agreement with previous reports of a relationship between hearing loss and increased rates of cognitive decline (Lin, 2011; Lin et al., 2013; Deal et al., 2015). Of particular note, the individuals who wore hearing aids had rates of cognitive decline similar to those of normal-hearing individuals, and slower trajectories than hearing-impaired subjects who did not wear hearing aids; however, this difference based on hearing loss disappeared when other variables, including depression, were controlled. The authors point out that hearing loss has been associated with depression and social isolation in previous studies (Kiely et al., 2013; Li et al., 2014) and that these factors may mediate the relationship between hearing loss and cognitive decline. In other words, the findings of the current study suggest that there may not be a direct relationship between hearing loss and cognitive decline.

It is important to note that this study used self-report as the measure of hearing loss and hearing aid use. The self-report technique was likely a less expensive and more logistically feasible option, given the magnitude of the study. Additionally, self-reported hearing loss was measured only at the initial visit, so the subjects' progression of hearing loss is unknown. Of particular relevance to the current discussion, cognitive status may itself affect a person's perceived ability to communicate in daily activities, particularly in noise. However, individuals who experience difficulty functioning in noise because of cognitive or memory constraints may or may not have elevated pure-tone thresholds. The self-report measure may therefore not represent actual hearing loss but could instead reflect other subject characteristics. Without audiometric testing, it is unclear how hearing loss itself affected performance on the measures of cognition.

The evidence presented by Amieva and colleagues adds modest insight to our collective understanding of the relationships between hearing status and cognitive ability. Caution must still be exercised when suggesting that treatment of hearing loss may slow or attenuate cognitive decline. Deeper understanding will require additional longitudinal studies with thorough diagnostic protocols and randomized, controlled experimental designs. Thankfully, this work is underway at universities and hospitals in the United States and Europe. Some pilot outcomes were reviewed in an earlier blog post and are available in the original article.

References

Amieva, H., Ouvrard, C., Giulioli, C., Meillon, C., Rullier, L. & Dartigues, J.F. (2015). Self-Reported Hearing Loss, Hearing Aids and Cognitive Decline in Elderly Adults: A 25-Year Study. Journal of the American Geriatrics Society 63(10), 2099-2104.

Chien, W. & Lin, F. (2012). Prevalence of hearing aid use among older adults in the United States. Archives of Internal Medicine 172, 292-293.

Deal, J., Sharrett, A., Albert, M., Coresh, J., Mosley, T., Knopman, D., Wruck, L. & Lin, F. (2015). Hearing impairment and cognitive decline: A pilot study conducted within the Atherosclerosis Risk in Communities Neurocognitive Study. American Journal of Epidemiology 181 (9), 680-690.

Desjardins, J. & Doherty, K. (2013). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing 35(6), 600-610.

Ferrite, S., Sousa-Santana, V. & Marshall, S. (2011). Validity of self-reported hearing loss in adults: performance of three single questions. Revista de Saúde Pública 45(5), 824-830.

Folstein, M., Folstein, S. & McHugh, P. (1975). “Mini Mental State”, a practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 12, 189-198.

Granick, S., Kleban, M. & Weiss, A. (1976). Relationships between hearing loss and cognition in normally hearing aged persons. Journal of Gerontology 31, 434-440.

Kiely, K., Anstey, K. & Luszcz, A. (2013). Dual sensory loss and depressive symptoms: the importance of hearing, daily functioning and activity engagement. Frontiers in Human Neuroscience 7, 837.

Li, C., Zhang, X. & Hoffman, J. (2014). Hearing impairment associated with depression in U.S. adults. National Health and Nutrition Examination Survey 2005-2010. Journal of the American Medical Association, Otolaryngology, Head and Neck Surgery 140, 293-302.

Lin, F. (2011). Hearing loss and cognition among older adults in the United States. The Journals of Gerontology A: Biological Sciences and Medical Sciences 66(10), 1131-1136.

Lin, F. & Albert, M. (2014). Hearing loss and dementia – who is listening? Aging and Mental Health 18(6), 671-673.

Lin, F., Ferrucci, L. & Metter, E. (2011). Hearing loss and cognition in the Baltimore Longitudinal Study of Aging. Neuropsychology 25(6), 763-770.

Lin, F., Yaffe, K., & Xia, J. (2013). Hearing loss and cognitive decline in older adults. Journal of the American Medical Association Internal Medicine 173 (4), 293-299.

Lindenberger, U. & Baltes, P. (1994). Sensory functioning and intelligence in old age: a strong connection. Psychology and Aging 9, 339-355.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Mick, P., Kawachi, I. & Lin, F. (2014). The association between hearing loss and social isolation in older adults. Otolaryngology Head Neck Surgery 150(3), 378-384.

Ohta, R., Carlin, M. & Harmon, B. (1981). Auditory acuity and performance on the mental status questionnaire in the elderly. Journal of the American Geriatric Society 29, 476-478.

Tun, P., McCoy, S. & Wingfield, A. (2009). Aging, hearing acuity and the attentional costs of effortful listening. Psychology and Aging 24(3), 761-766.

Uhlmann, R., Larson, E. & Koepsell, T. (1986). Hearing impairment and cognitive decline in senile dementia of the Alzheimer’s type. Journal of the American Geriatrics Society 34, 207-210.

Uhlmann, R., Larson, E., Rees, T., Koepsell, T. & Duckert, L. (1989). Relationship of hearing impairment to dementia and cognitive dysfunction in older adults. Journal of the American Medical Association 261, 1916-1919.

Valentijn, S., Van Boxtel, M. & Van Hooren, S. (2005). Change in sensory functioning predicts change in cognitive functioning: Results from a 6-year follow-up in the Maastricht Aging Study. Journal of the American Geriatrics Society 53, 374-380.

Weinstein, B. & Ventry, I. (1982). Hearing impairment and social isolation in the elderly. Journal of Speech and Hearing Research 25, 593-599.

The Fabric of Tomorrow

In Informed Dreaming, we explored the impact of the rate of scientific discovery and technological change on research in general and on hearing aid research in particular. From here we will begin to look more closely at how some of that change will manifest itself in the everyday technologies of tomorrow. So let's précis that roadmap.

There are two main technological forces in this story: computing power and connectivity. They are the backbone from which many other profoundly influential players will derive their power. If there were only one dominant idea, it would be ubiquitous computing, a term coined by the brilliant computer scientist Mark Weiser in 1991 in his influential Scientific American article "The Computer for the 21st Century." As head of computer science at Xerox's Palo Alto Research Center, he envisaged a future in which our world is inextricably interwoven with a technological fabric of networked "smart" devices. Such a network has the capability to manage our environments from the macro level down to a detailed, individualized level: everything from the power grid to the time and temperature of that morning latte.

But these devices are also inputs to the system: detectors and sensors feeding a huge river of information into the central core, or the cloud, as we now know it. Many of them are already worn by people (mobile phones, smart watches, activity monitors, and so on, all uploading to the cloud), and the sophistication and bio-monitoring capability of these wearables are increasing by the week. Moreover, many other sensors are stationary but have highly detailed knowledge of their transactions: cashless payments record the person, the time, the place, and the goods; so do tapping on and off public transport, taking a taxi, an Uber, or a flight, a Facebook post, street closed-circuit television security systems, your IP address, cookies, and the browser trail.

Notwithstanding the issues of privacy (if indeed that still exists), this provides an inkling of the data flowing into the cloud, and no doubt only the very tip of this gigantic iceberg. Big Data is here and it is here to stay, and although Google is King, these particular information technologies are but babies.

I was fortunate enough to attend the World Wide Web conference in 1998, where Tim Berners-Lee, the man who invented the World Wide Web while working at CERN in 1989, began promoting the idea of the Semantic Web: a means by which machines can efficiently communicate data with each other. In the ensuing years, much work has gone into developing the standards and implementing the systems. In that time, however, two other massive developments have occurred that may overshadow or subsume these efforts. On the one hand, natural language processing has matured, using both text and audio, in the form of Siri, Google Talk, and Cortana, to mention just a few. On the other hand, driven by huge strides in cognitive neuroscience, processing power, and advanced machine learning, we are witnessing a rebirth of Artificial Intelligence (AI) and the promise of so-called Super Intelligence.

So just how can we design listening (hearables) technologies, hearing aids in particular, that can capitalize on these profound developments? Well, let’s take a sneak peek at what a future hearing aid might look like in this brave new world.

Imagine a hearing aid that can listen to the world around the wearer and break a complex auditory scene down into its key, relevant pieces, sorting the meaningful from the clutter. A hearing aid that can also listen in on the brain activity of the listener, identify the wearer's focus of attention, and enhance the information from that source as it is coded by the brain. A hearing aid that is, in fact, not a hearing aid at all, but a device that people wear all the time: a communication channel to other people and machines, a source of entertainment, and a brain and body monitor that also maps their passage through space. Such a device provides support in adverse listening conditions to the normally hearing and the hearing impaired alike; it simply changes modes of operation as appropriate.

Possibly the most surprising thing about this scenario is that, in advanced research laboratories around the world (including Starkey Research), the technologies that would enable such a device exist RIGHT NOW. Of course, they are not yet developed to the level of sophisticated sensing and control required to give life to this vision, nor are they in a form that people can put in their ears. But they do exist, and if we have learned anything from watching the progress of science and technology over the last few decades, their emergence as the Universal Hearable Version 1.0 will likely happen even sooner than we might sensibly predict from where we now stand.

 

Modern Remote Microphones Greatly Improve Speech Understanding in Noise

Rodemerk, K. & Galster, J. (2015).  The benefit of remote microphones using four wireless protocols. Journal of the American Academy of Audiology, 1-8.

Wireless hearing aids have made remote microphones more accessible, affordable, and easier to use, and as a result these systems have become more common. Most hearing aid developers now offer remote microphones that transmit at different wireless frequencies than the comparatively traditional FM system. Some of these systems pair directly with the hearing aids via 900 MHz or 2.4 GHz wireless protocols, whereas others communicate via a receiver boot that is physically attached to the hearing aids or via an intermediate device worn around the neck or on the lapel. Most of these intermediate devices act as a relay, receiving a Bluetooth audio signal from the remote microphone and translating it to a wireless signal that can be received by the hearing aid. The goal of all of these systems is to provide the benefits of a clean speech input, including the ability to overcome distance, reverberation, and noise, delivering a consistently high-quality speech signal to the listener.

The purpose of the current study was to compare the performance of four commercially available hearing aid/remote microphone systems and to assess their benefits for hearing aid users. Sixteen hearing-impaired individuals participated: ten women and six men, with a mean age of 68.5 years (range 52-81 years). All subjects had bilateral, symmetrical, sensorineural hearing loss. Ten participants were experienced hearing aid users and six were non-users, though hearing aid experience was not specifically examined in this study.

For the purposes of the study, participants were fitted with three bilateral sets of hearing aids from three different manufacturers, paired with four different remote microphone systems. One set of aids communicated directly with a remote microphone via a 900 MHz signal; another set communicated directly with the remote microphone via a 2.4 GHz signal. The third set worked either with an FM remote microphone transmitter and an FM receiver boot, or with a remote microphone used with an intermediate Bluetooth receiver that relayed the signal to the hearing aids via a magnetic wireless protocol; this set of hearing aids was therefore used in two of the four remote microphone conditions.

Speech recognition was assessed using the HINT test (Nilsson et al., 1994). The HINT sentences were presented in continuous, 55 dB speech-shaped noise delivered through four loudspeakers surrounding the listener at 45, 135, 225, and 315 degrees. Sentence stimuli were presented from 0 degrees azimuth, at levels that were adaptively varied to find the level required for a 50% correct score. Twenty sentences were presented in each listening condition, and the order of manufacturers and listening conditions was randomized for each participant. Each listening condition was assessed at two talker-listener distances, with the listener seated 6 feet from the talker loudspeaker and again at 12 feet.
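For readers unfamiliar with adaptive speech-in-noise testing, here is a minimal sketch of the general idea: a simple one-up/one-down sentence track that converges on the level giving roughly 50% correct. It is a generic illustration with made-up parameters, not the exact HINT scoring rules used in the study.

import math
import random

def simulated_listener(level_db, true_srt=62.0, slope=0.2):
    # Logistic psychometric function: probability correct rises with presentation level.
    p_correct = 1.0 / (1.0 + math.exp(-slope * (level_db - true_srt)))
    return random.random() < p_correct

def adaptive_srt(n_sentences=20, start_level=70.0, step_db=2.0):
    level = start_level
    track = []
    for _ in range(n_sentences):
        correct = simulated_listener(level)
        track.append(level)
        level += -step_db if correct else step_db   # down after a correct response, up after an error
    return sum(track[4:]) / len(track[4:])          # average level, discarding the first few approach trials

print("Estimated SRT: %.1f dB" % adaptive_srt())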

Speech recognition was assessed under four listening conditions:

1.         Unaided

2.         Hearing aid only – omnidirectional

3.         Remote microphone only (hearing aid microphones off)

4.         Remote microphone plus hearing aid microphones (equal contribution from remote and HA microphones)

In the remote microphone only conditions, all four remote microphone systems yielded speech recognition scores that were 11-15 dB better than in the unaided and hearing aid only conditions. There were no significant differences among the four remote microphone systems. This pattern of results held whether the listener was seated six feet or twelve feet from the loudspeaker.

Similar results were found for the remote microphone plus hearing aid conditions, in that all four remote microphone conditions were better than the unaided or hearing aid alone conditions. However, only three of the four hearing aid/remote microphone systems were comparable to each other in this condition: the FM, Bluetooth, and 900MHz models. The 2.4GHz model yielded significantly poorer scores than the other systems when the hearing aid microphone was used in combination with the remote microphone. As in the remote microphone only condition, results for the remote microphone plus hearing aid condition were comparable for the listening distances of 6 feet and 12 feet.

All four of the remote microphone systems evaluated in this study improved speech recognition scores by 6 to 16 dB, a range comparable to previous reports of performance with FM systems (Hawkins, 1984; Boothroyd, 2004; Lewis, 2008). These results indicate that hearing aid users who experience difficulty understanding speech in noisy environments could expect benefit from any of the systems evaluated in this study. The talker-listener distances examined here are comparable to those examined in previous studies and represent typical distances at which hearing aid users might listen to conversational partners in everyday situations.

This study showed that when the hearing aid microphone was active and contributing equally with the remote microphone, the speech recognition benefit was smaller than that measured with the streaming remote microphone alone, though there was still a significant improvement over the unaided and hearing aid only conditions. This agrees with previous studies that reported decreased FM benefit when the hearing aid microphone level was equal to the FM microphone level, compared with FM alone (Boothroyd & Iglehart, 1998). However, many remote microphone systems allow the hearing aid microphone level to be adjusted in the fitting software. The optimal hearing aid microphone attenuation for remote microphone use requires further examination and may vary with the environment and each patient's listening goals.
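A simple power calculation helps explain why equal mixing reduces benefit: the speech carried on the two paths adds coherently, while their independent noises add in power, so the mixed signal-to-noise ratio falls well below that of the clean remote path. The sketch below is an idealized model under those assumptions, not an analysis of the study's actual signal paths.

import math

def mixed_snr_db(snr_remote_db, snr_local_db):
    # Equal-power mix of the remote-microphone and hearing-aid-microphone signals,
    # assuming identical speech on both paths (power = 1) and independent noise.
    noise_remote = 10 ** (-snr_remote_db / 10)
    noise_local = 10 ** (-snr_local_db / 10)
    mixed_noise = 0.25 * (noise_remote + noise_local)   # averaging halves each noise amplitude
    return 10 * math.log10(1.0 / mixed_noise)

print("%.1f dB" % mixed_snr_db(snr_remote_db=20, snr_local_db=0))   # about 6 dB, far below the remote mic alone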

This study provides compelling support for the benefits of remote microphone systems and lays the groundwork for further examination of how remote microphones interact with hearing aid programming parameters and a variety of acoustic environments. Of clinical note, the research audiologists supporting data collection quickly learned the importance of counseling for successful use of remote microphones. For instance, it was apparent that many participants expected tabletop placement of a remote microphone to yield benefits similar to those experienced when the remote microphone was placed near the talker's mouth. This point of confusion was clarified through live demonstration of the remote microphone at the time of fitting, during which the participant can clearly hear that the talker's voice becomes much quieter as the remote microphone is moved away from the talker's mouth. The remote microphone can be an extremely useful tool, but prescribing one must be accompanied by sufficient counseling and in-office demonstration time.

 

References

Boothroyd, A. (2004). Hearing aid accessories for adults: the remote FM microphone. Ear and Hearing 25 (1), 22-23.

Boothroyd, A. & Iglehart, F. (1998). Experiments with classroom FM amplification. Ear and Hearing 19 (3), 202-217.

Hawkins, D. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders 49(4): 409-418.

Lewis, D. (2008). Trends in classroom amplification. Contemporary Issues in Communication Sciences and Disorders 35, 122-132.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95(2), 1085-1099.

Rodemerk, K. & Galster, J. (2015).  The benefit of remote microphones using four wireless protocols. Journal of the American Academy of Audiology, 1-8.

Listening is more effortful for new hearing aid wearers

Ng, E.H.N., Classon, E., Larsby, B., Arlinger, S., Lunner, T., Rudner, M., Ronnberg, J. (2014). Dynamic relation between working memory capacity and speech recognition in noise during the first six months of hearing aid use. Trends in Hearing 18, 1-10.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Numerous studies have illustrated the relationship between working memory, cognitive resources, and speech perception, and suggest that listeners with limited working memory or cognitive resources are more likely to struggle with speech recognition in noise (Gatehouse et al., 2003; Lunner, 2003). Conversely, larger working memory capacity may allow more rapid and successful matching between speech inputs and stored lexical templates. This concept is described by the Ease of Language Understanding (ELU) model, which proposes that cognitive processing demands vary according to the degradation of the speech signal in different environments (Ronnberg, 2003; Ronnberg et al., 2008). In quiet, favorable listening conditions, speech inputs are easily matched to stored representations and processing is automatic. In difficult listening environments, more explicit processing is required to match inputs to stored representations; how efficiently this is achieved depends on working memory capacity.

Using these concepts as underpinning, Ng and her colleagues proposed that working memory and cognitive processing may have more of an impact on speech recognition for new hearing aid users than for experienced users. Hearing aids improve speech audibility, and directional microphones and noise reduction can help preserve speech in adverse listening conditions, which should reduce the need for explicit working memory processing. However, if the phonological representations stored in memory have been degraded by hearing loss over time, the amplified speech perceived by new hearing aid users will not match their stored templates, so more explicit processing in working memory may be required to identify words. Over time, as the individual becomes acclimated to amplified sound, the stored templates may adapt and become more similar to their acoustic counterparts, reducing the working memory and cognitive load required for correct identification. Following this reasoning, Ng and her colleagues predicted a significant relationship between cognitive functioning and speech recognition in new, first-time hearing aid users, a relationship that would weaken over time as stored speech representations based on amplified sound become established.

To examine this hypothesis, 27 first-time hearing aid users were recruited from a pool of subjects at a Swedish university Audiology clinic. All had mild to moderately-severe sensorineural hearing loss and no previous experience with hearing aids. Nine of the subjects were fitted monaurally and 18 were fitted binaurally. Four participants had in-the-ear or canal instruments and 23 had behind-the-ear instruments. Most of the subjects became full-time hearing aid users and the rest were consistent, part-time users.

Approximately four months prior to being fitted with their hearing aids, subjects attended an experimental session at which they completed speech-recognition-in-noise testing and cognitive testing. Four cognitive tests were administered: the Reading Span test, a physical matching task, a lexical decision task, and a rhyme judgment test. The Swedish version of the Reading Span test was used to assess working memory, or listeners' ability to process and store verbal information in a parallel task design (Ronnberg et al., 1989). After reading a list of sentences, subjects were asked to recall either the first or the final word of each sentence in the list, and the test was scored as the total number of words correctly recalled. The physical matching test (Posner & Mitchell, 1967), which measured general processing speed, required participants to judge whether two examples of the same letter were physically identical or different in shape (e.g., A-A vs. A-a); scores were based on reaction time for correct trials. The lexical decision task required subjects to judge whether a string of three letters presented on a screen was a real Swedish word, and was also scored on reaction time for correct trials. The rhyme judgment test required subjects to determine whether two words presented on a screen rhymed (Baddeley & Wilson, 1985); this test was intended to measure the quality of stored phonological representations and was scored as the percentage of correct judgments.

The speech recognition in noise test was repeated at the hearing aid fitting appointment (0 months) and at approximately three-month intervals thereafter (3 and 6 months). The investigators chose these intervals for evaluating speech recognition and its relationship to the cognitive tests based on previous reports suggesting that a familiarization period of 4-9 weeks is required to reduce cognitive load (Rudner et al., 2011).

The results of the speech recognition in noise test showed, not surprisingly, that aided SRT was significantly better than unaided SRT.  The change in SRT over time was also significant, in that the SRT measured at 6 months was significantly better than at 0 months. The 3-month SRT was not significantly different from the 0-month or 6-month tests.  Age and pure-tone average (PTA) were significantly correlated with SRT at the 0-, 3- and 6-month tests. At 0 and 3 months, the cognitive measures of reading span, physical matching and lexical decision were all correlated with SRT. At 6 months, only lexical decision and reading span remained significantly correlated with SRT. These results indicate that the relationship between cognitive measures and speech recognition declined over the first 6 months of hearing aid use.  Regression analysis showed a similar pattern, in that reading span and PTA were significant predictors of speech recognition at 0 months, but by 6 months only PTA remained a significant predictor.
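
For readers who want to see the shape of this kind of analysis, the sketch below runs a per-session correlation and a simple two-predictor regression so that a weakening working-memory contribution becomes visible. It uses entirely synthetic data; the effect sizes, seed and variable names are assumptions, not values from the study.

```python
# Hedged sketch: how a declining cognition-SRT relationship could be examined
# session by session. Synthetic data only; not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 27                                   # sample size matching the study
pta = rng.normal(45, 10, n)              # pure-tone average (dB HL), synthetic
reading_span = rng.normal(20, 5, n)      # working memory score, synthetic

for month, wm_weight in [(0, -0.15), (3, -0.08), (6, -0.02)]:
    # Synthetic SRTs in which the working-memory contribution shrinks over time
    srt = 0.1 * pta + wm_weight * reading_span + rng.normal(0, 1, n)

    # Simple correlation between working memory and SRT at this session
    r, p = stats.pearsonr(reading_span, srt)

    # Two-predictor least-squares fit: SRT ~ intercept + PTA + reading span
    X = np.column_stack([np.ones(n), pta, reading_span])
    coefs, *_ = np.linalg.lstsq(X, srt, rcond=None)

    print(f"{month} mo: r(reading span, SRT) = {r:.2f} (p = {p:.3f}), "
          f"betas: PTA {coefs[1]:.2f}, reading span {coefs[2]:.2f}")
```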

The pattern of results in this study supports the authors’ proposal that for first-time hearing aid users, working memory and cognitive processing play a more important role in speech perception in noise immediately after fitting than they do after acclimatization. Their hypothesis that stored perceptual representations are altered by long-term hearing loss, and are therefore mismatched with newly amplified speech inputs, is supported by common clinical observations. First-time hearing aid users typically describe amplified speech as “tinny”, “metallic”, or in some way “artificial”.  For most new hearing aid users this perception resolves within a few weeks, though others may require longer periods of use to become acclimated. As time goes on, new hearing aid users usually report that speech sounds more natural, and the data presented here support the assertion that stored lexical representations, distorted by long-term hearing impairment, may adapt as the audibility of speech sounds is consistently restored.

The results of this study support the importance of cognitive functioning for speech perception in noise and suggest that new hearing aid users experience increased cognitive demands for understanding speech as compared to experienced hearing aid users. It follows that individuals who have limited working memory or impaired cognition may also experience longer acclimatization periods with their new hearing aids.

Clinicians are accustomed to counseling patients to wear their hearing aids consistently, for most of the day, every day.  The authors of this study did not examine the usage patterns of their subjects with reference to their hypothesis, but future studies should investigate the potential effects of limited hearing aid use on the relationship between cognition and speech recognition in noise. If full-time use results in a more rapidly waning relationship between these variables (indicating a more rapid decrease in the cognitive load required for speech recognition), it would underscore the importance of consistent hearing aid use for new users, especially those with cognitive or working memory limitations.

 

References

Baddeley, A. & Wilson, B. (1985). Phonological coding and short term memory in patients without speech. Journal of Memory and Language 24(1), 490-502.

Gatehouse, S., Naylor, G., & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology 42 (Suppl. 1), S77-S85.

Hagerman, B. & Kinnefors, C. (1995). Efficient adaptive methods for measuring speech reception threshold in quiet and in noise. Scandinavian Audiology 24 (1), 71-77.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology 42, (Suppl. 1), S49-S58.

Lunner, T., Rudner, M. & Ronnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology 50, 395-403.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Ng, E.H.N., Classon, E., Larsby, B., Arlinger, S., Lunner, T., Rudner, M., Ronnberg, J. (2014). Dynamic relation between working memory capacity and speech recognition in noise during the first six months of hearing aid use. Trends in Hearing 18, 1-10.

Posner, M. & Mitchell, R. (1967). Chronometric analysis of classification. Psychological Review 74(5), 392-409.

Ronnberg, J., Arlinger, S., Lyxell, B. & Kinnefors, C. (1989). Visual evoked potentials: Relation to adult speechreading and cognitive function. Journal of Speech and Hearing Research 32(4), 725-735.

Ronnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. International Journal of Audiology 42 (Suppl. 1), S68-S76.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47 (Suppl. 2), S99-S105.

Rudner, M., Ronnberg, J. & Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. Journal of the American Academy of Audiology 22, 156-167.

 

These factors lead to successful hearing aid use

Hickson, L., Meyer, C., Lovelock, K., Lampert, M. & Khan, A. (2014). Factors associated with success with hearing aids in older adults. International Journal of Audiology 53, S18-S27.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

There is a saying in real estate that the three most important factors determining property value are location, location, location.  A similar argument could be made for hearing aid fitting. Three of the most important factors in hearing aid success may be follow-up, follow-up, follow-up. Certainly, success cannot be expected without appropriate selection and verification, but thorough training, counseling and consultation after the fitting can have a huge impact on the comfort and perceived benefit of new hearing aids.

Hearing aid success can generally be defined as an outcome in which the patient wears the instruments regularly and reports benefit from them.  Knudsen et al. (2010) reviewed several studies and found a few factors that were consistently related to success with hearing aids.  In general, the individuals most likely to do well were those who had positive attitudes about hearing aids prior to fitting and a greater degree of self-reported hearing difficulty (Knudsen et al., 2010; Hickson et al., 1986, 1999; Cox et al., 2007).  In studies examining why people don’t use their hearing aids, the most commonly cited reasons were lack of perceived benefit and problems with the fit and comfort of the aids (McCormack & Fortnum, 2013).

A better understanding of how these factors interact will help clinicians guide their patients to become consistent, successful hearing aid users. Defining success as a combination of regular use and self-reported benefit, Hickson and her colleagues examined the association between audiological and non-audiological factors and successful hearing aid outcomes.  The audiological factors they studied were duration and degree of hearing loss, presence of tinnitus, style of hearing aid and insertion gain with hearing aids. Non-audiological factors were grouped into four categories: attitudes and cues to action, demographic characteristics, psychological factors and age-related factors.

One hundred and sixty adults over age 60 participated in the study, with a mean age of 73 years. All had hearing loss greater than 25 dB HL and fewer than 2 years of experience with hearing aids. Of the 160 subjects, 75 were classified as unsuccessful hearing aid users and 85 were classified as successful users.  Unsuccessful users were defined as those who reported little or no use and/or benefit with their hearing aids.

Subjects participated in one session at which they completed audiological testing, real-ear measurements, cognitive testing, a case history and a general health questionnaire. Two weeks prior to the session they were given 8 self-report questionnaires to complete at home:

1. Hearing Handicap Questionnaire (HHQ; Gatehouse & Noble, 2004)

2. Self-Assessment of Communication (SAC; Schow & Nerbonne, 1982)

3. Attitude to Hearing Aids Questionnaire (VanDenBrink, 1995)

4. Measure of Audiologic Rehabilitation – Self-Efficacy for Hearing Aids (MARS-HA; West & Smith, 2007)

5. Coping Strategy Indicator (CSI; Amirkhan, 1990)

6. Locus of Control scales (Levenson, 1981; Presson et al., 1997)

7. Auditory Lifestyle & Demand Questionnaire (ALDQ; Gatehouse et al., 1999)

8. Social Activities Survey (SOCACT; Cruice et al., 2001)

Data analysis revealed that four factors were significantly related to hearing aid success. In order from strongest to weakest association, these factors were positive support from others, hearing difficulties in everyday life, insertion gain (for a 55 dB input level) and the interaction between attitude toward hearing aids and advanced handling (e.g., identification of different components of a hearing aid and how confident the user was in manipulating the aids). Overall, hearing aid users were more likely to achieve success if they had support from friends and family, perceived greater difficulty hearing, their insertion gain matched target, they possessed a positive attitude about hearing aids and had greater confidence in their ability to use the hearing aids. Conversely, almost 25% of unsuccessful hearing aid users reported that their hearing aids didn’t help them hear better or were too noisy.  Less common responses from unsuccessful users were that they didn’t need hearing aids, had difficulty manipulating or adjusting to the aids or obtained no benefit from the aids.
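
As a rough illustration of how such associations can be ranked, the sketch below correlates each candidate predictor with a binary success outcome and sorts the factors from strongest to weakest. The data are entirely synthetic, the factor names are simplified labels, and this is not necessarily the statistical approach Hickson and colleagues used.

```python
# Hedged sketch: ranking candidate predictors of hearing aid success by the
# strength of their association with a binary outcome. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 160                                  # sample size matching the study
success = rng.integers(0, 2, n)          # 1 = successful user, 0 = unsuccessful

# Hypothetical factor scores, each nudged by success to create an association
factors = {
    "support_from_others":         rng.normal(0, 1, n) + 0.8 * success,
    "everyday_hearing_difficulty": rng.normal(0, 1, n) + 0.5 * success,
    "insertion_gain_55dB":         rng.normal(0, 1, n) + 0.3 * success,
    "attitude_x_handling":         rng.normal(0, 1, n) + 0.2 * success,
}

# Pearson r between a binary and a continuous variable is the point-biserial r
assoc = {}
for name, scores in factors.items():
    r, p = stats.pearsonr(success, scores)
    assoc[name] = r

for name, r in sorted(assoc.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:30s} r = {r:.2f}")
```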

The factor most strongly related to success was the support of significant others, as indicated by statements such as “The people around me think it was wise to obtain a hearing aid” or “The people around me think I hear better with my hearing aid.” This underscores the importance of involving spouses, family members or friends in the hearing aid fitting process, so that their observations and comments can be considered and discussed at the initial consultation, fitting and follow-up appointments.  Having support at the initial consultation also helps the potential hearing aid user recognize their need for help. As many clinicians know, the hearing-impaired individual is often less aware of their communication difficulties than their close associates are. Friends and family members who support the need for hearing aids, and who observe success with them, can be influential in the patient’s own motivation and perceived benefit.

Detailed discussion of test results and administration of hearing handicap questionnaires can also motivate potential hearing aid users to proceed with an evaluation and fitting. It is common for people with hearing loss to think that other people mumble, or that the source of communication difficulty is external rather than related to their hearing loss. Seeing the configuration of the hearing loss, perhaps in the context of speech and familiar sounds, can help them understand what they are missing. Hearing handicap questionnaires illuminate some of the familiar challenges that hearing-impaired individuals experience. Clinicians are familiar with the scenario in which hearing-impaired patients feel they don’t really have hearing loss because “they can still hear, they just don’t understand.”  Simply explaining how the audiogram illustrates the difference between their hearing and normal hearing can help them understand the implications of the loss and the need for amplification.

A positive attitude about hearing aids was related to increased use and perceived benefit. This is a harder goal to achieve, but should be addressed at the initial consultation and consistently thereafter. Every clinician has met patients who know a friend or neighbor who doesn’t like their hearing aids and it can be challenging to persuade skeptics that there is reason to expect improvement from hearing aids. It may be helpful to have testimonials from satisfied patients available on the clinic website or in written materials in the office.  I also find it helpful to simply assure people that with the quality of today’s hearing aid technology, there are very few problems that can’t be solved with thorough assessment, training and follow-up.

The issue of hearing aid stigma and negative associations is not an easy problem to overcome, but it has improved over time and will likely continue to improve. Clinicians should encourage successful hearing aid users to share their positive experiences with friends, family and co-workers, to act as advocates for the benefits of hearing aids. Similarly, friends and family of those who have experienced hearing aid success should spread the word whenever possible. The most powerful endorsements come from people who have experienced better communication with their own hearing aids. Patients often tell me that hearing aids make their lives easier; others tell me that they can’t imagine trying to function without their hearing aids. The more these hearing aid success stories circulate among the general public, the more motivated hearing-impaired individuals will be to pursue hearing aids for themselves.

Individuals who had greater confidence in their ability to use the hearing aids were more likely to be successful, regular users. This is another factor that can be addressed through training, guided practice, and clearly written materials. Many new patients are overwhelmed by the amount of information covered during the hearing aid fitting; informing the patient that you have a plan for training during later visits will ease anxiety about retaining all of the information shared at the first fitting. Including friends or family members at the fitting also contributes to success, as these individuals can assist with use and care of the hearing aids during the adjustment period.

The responses of the participants in this study illuminate many of the factors that affect hearing aid success. With an understanding of these factors and thorough follow-up care, clinicians can avoid or solve most problems and most hearing aid users should perceive benefit from their instruments. Because hearing aids are medical devices, they require comprehensive care from trained professionals. Time spent on fine tuning, training and counseling during the first few weeks after the fitting can have long-term impact on usage patterns, satisfaction and perceived benefit. Clinicians and experienced hearing aid users should share stories of positive outcomes to counterbalance negative perceptions so that new and potential users can embark upon hearing aid fittings with expectations of success.

 

References

Amirkhan, J. (1990). A factor analytically derived measure of coping: The coping strategy indicator. Journal of Personality and Social Psychology 59, 1066-1074.

Champion, V. & Skinner, C. (2008). The health belief model. In: K. Glanz, B.K. Rimer, K. Viswanath (eds.) Health Behavior and Health Education: Theory, Research and Practice. San Francisco: Jossey-Bass.

Cox, R. & Alexander, G. (1995). The abbreviated profile of hearing aid benefit. Ear and Hearing 16, 176-186.

Cox, R., Alexander, G. & Gray, G. (2007). Personality, hearing problems, and amplification characteristics: Contributions to self-report hearing aid outcomes. Ear and Hearing 28, 141-162.

Cruice, M. (2001). Communication and quality of life in older people with aphasia and healthy older people. Ph.D. Dissertation, The University of Queensland, Australia.

Gatehouse, S., Elberling, C. & Naylor, G. (1999). Aspects of auditory ecology and psychoacoustic function as determinants of benefits from and candidature for non-linear processing in hearing aids. In: Kolding (ed.) 18th Danavox Symposium, 221-233.

Gatehouse, S. & Noble, W. (2004). The speech, spatial and qualities of hearing scale (SSQ). International Journal of Audiology 43, 85-99.

Glanz, K., Rimer, B. & National Cancer Institute – U.S. (2005). Theory at a Glance: A Guide for Health Promotion Practice. U.S. Department of Health and Human Services, National Cancer Institute.

Hickson, L., Hamilton, L. & Orange, S. (1986). Factors associated with hearing aid use. Australian Journal of Audiology 8, 37-41.

Hickson, L. Timm, M., Worrall, L. & Bishop, K. (1999). Hearing aid fitting: Outcomes for older adults. Australian Journal of Audiology 21, 9-21.

Hickson, L., Meyer, C., Lovelock, K., Lampert, M. & Khan, A. (2014). Factors associated with success with hearing aids in older adults. International Journal of Audiology 53, S18-S27.

Knudsen, L., Oberg, M., Nielsen, C., Naylor, G.  & Kramer, S. (2010).  Factors influencing help seeking, hearing aid uptake, hearing aid use and satisfaction with hearing aids: A review of the literature. Trends in Amplification 14, 127-154.

Levenson, H. (1981). Differentiating among internality, powerful others, and chance. In: H.M. Lefcourt (ed.) Research with the Locus of Control Construct: Assessment Methods. New York: Academic Press. pp. 15-63.

McCormack, A. & Fortnum, H. (2013).  Why do people fitted with hearing aids not wear them? International Journal of Audiology 52, 360-368.

Metselaar, M., Maat, B., Krijnen, P., Verschure, H. & Dreschler, W. (2008). Self-reported disability and handicap after hearing aid fitting and benefit of hearing aids: Comparison of fitting procedures, degree of hearing loss, experience with hearing aids and unilateral and bilateral fittings. European Archives of Otorhinolaryngology, open access.

Presson, P., Clark, S. & Benassi, V. (1997). The Levenson locus of control scales: Confirmatory factor analyses and evaluation. Journal of Social Behavior and Personality 25, 93-104.

Stark, P. & Hickson, L. (2004). Outcomes of hearing aid fitting for older people with hearing impairment and their significant others. International Journal of Audiology 43, 390-398.

Schow, R. & Nerbonne, M. (1982). Communication screening profile: Use with elderly clients. Ear and Hearing 3, 135-147.

VanDenBrink, R. (1995). Attitude and illness behavior in hearing impaired elderly. Ph.D. dissertation, University of Groningen.

Ventry, I. & Weinstein, B. (1982). The hearing handicap inventory for the elderly: a new tool. Ear and Hearing 3, 128-134.

West, R. & Smith, S. (2007). Development of a hearing aid self-efficacy questionnaire. International Journal of Audiology 46, 759-771.

Cognitive Benefits of Digital Noise Reduction

Desjardins, J. & Doherty, K. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing 35 (6), 600-610.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Understanding speech in noise can be a challenge for anyone with hearing loss, but it is especially difficult for older listeners (Plomp, 1978; Duquesnoy, 1983; Dubno et al., 1984; Helfer & Freyman, 2008).  Older individuals also experience increased listening effort in noisy conditions as compared to younger listeners (Desjardins & Doherty, 2013).  Listening effort is often measured in a dual-task paradigm, during which subjects perform a second task while simultaneously repeating speech in noise. Increased listening effort is reflected by the allocation of cognitive resources away from the secondary task, resulting in poorer performance. When this effect is considered with reference to everyday situations, increased listening effort could have repercussions for elderly individuals beyond communication, affecting their ability to multi-task, which could have associated safety concerns.  Listening effort could also affect the risk of social isolation, as elderly, hearing-impaired individuals may feel reluctant to exert the energy and effort required to interact with others in group situations.

In a review of the literature on speech recognition and cognitive abilities, Akeroyd (2008) found that hearing was the primary predictor of speech recognition performance, but working memory capacity was the second best predictor. Because speech perception is affected by peripheral auditory processing as well as cognitive functions like working memory and speed of processing, the measurement of speech recognition scores alone may not be sufficient to evaluate the potential benefits of noise reduction in hearing aids. To this end, some studies have examined the effects of noise reduction on listening effort and the allocation of cognitive resources. Ng et al. (2013) found that noise reduction improved working memory function for some subjects and Sarampalis et al. (2009) found that the use of noise reduction reduced listening effort, resulting in quicker visual reaction times and better word recall on secondary tasks.

The current study investigated the relationships among noise reduction, listening effort and speech recognition in middle-aged to older adults with hearing loss. A dual-task paradigm, with speech recognition in noise as the primary task and visual tracking as the secondary task, was used to evaluate listening effort. Working memory tends to decline with advancing age (Salthouse, 1994), as does the speed of perceptual processing (Salthouse, 1985; Wingfield et al., 1985), so measures of working memory and processing speed were also examined. Twelve subjects, ranging in age from 50 to 74 years, participated in the study.  All had symmetrical, sensorineural hearing loss and were experienced hearing aid users. For the purpose of the study, subjects were fitted with behind-the-ear hearing aids and disposable canal earmolds, programmed to DSL v5 targets (Scollie et al., 2005). Hearing aids were programmed with two memories: in the first, all special signal processing features were disabled; in the second, noise reduction (Voice iQ2) was set to maximum.

Speech recognition in noise was examined using the R-SPIN test (Bilger et al., 1984), which consists of eight lists of 50 sentences each. Half of the sentences on each list are high-context and half are low-context.  The sentences were presented in two-talker babble (TTB), composed of two female talkers reciting nonsense sentences; TTB has been shown to significantly affect listening effort for older subjects in previous research (Desjardins & Doherty, 2013).  Prior to the dual-task procedure, subjects performed a sentence recognition test in noise to determine the SNRs required to achieve 76% and 50% performance.  Not surprisingly, a more favorable SNR was required for the higher performance level: average SNRs were 4.4 dB for the 76% criterion and 1.8 dB for the 50% criterion. These individual SNRs were used later to derive four listening conditions for the dual-task procedure (a sketch of this 2 × 2 design follows the list below):

1. Moderate listening condition (more favorable SNR) with noise reduction

2. Moderate listening condition without noise reduction

3. Difficult listening condition (less favorable SNR) with noise reduction

4. Difficult listening condition without noise reduction
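
The sketch below shows one way this per-subject 2 × 2 condition set could be assembled in code; the subject IDs, SNR values and field names are hypothetical, chosen only to mirror the description above.

```python
# Hedged sketch: building the 2 x 2 set of dual-task conditions (SNR level x
# noise reduction on/off) from each subject's individually measured SNRs.
from itertools import product

# Hypothetical SNRs (dB) at which each subject reached 76% ("moderate") and
# 50% ("difficult") sentence recognition in the pre-test
subject_snrs = {"S01": {"moderate": 4.0, "difficult": 1.5},
                "S02": {"moderate": 5.2, "difficult": 2.3}}

def conditions_for(subject_id):
    """Return the four listening conditions for one subject."""
    snrs = subject_snrs[subject_id]
    return [{"subject": subject_id,
             "condition": level,           # moderate (76%) or difficult (50%)
             "snr_db": snrs[level],
             "noise_reduction": nr}
            for level, nr in product(("moderate", "difficult"), (True, False))]

for cond in conditions_for("S01"):
    print(cond)
```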

Following the sentence blocks in the dual-task procedure, listeners rated how easy it was to listen to the sentences on a scale from 0 to 100, with 0 being “very, very difficult” and 100 being “very, very easy” (Geller & Margolis, 1984; Feuerstein, 1992).

Results for the R-SPIN speech recognition task revealed significantly higher scores in the moderate listening condition compared to the difficult listening condition. There was no main effect of noise reduction or any interaction between listening condition and noise reduction, indicating that noise reduction did not have a significant impact on speech recognition ability in noise. Scores for high-context sentences were significantly better than for low-context sentences, indicating that listeners used context to help them understand the sentences better. There was no interaction between context and listening condition.

Listening effort was measured with the dual-task combination of speech recognition in noise and visual tracking. Poorer performance on the visual tracking task indicated higher listening effort, or the allocation of more cognitive resources toward speech recognition. During the dual-task procedure, sentence recognition scores did not differ significantly across the four test conditions, but listening effort did.  Without noise reduction, listening effort was significantly higher for the difficult listening condition than for the moderate condition. With noise reduction, there was no significant difference in listening effort between the moderate and difficult listening conditions. In other words, noise reduction reduced listening effort in the difficult listening condition but did not have a significant effect in the moderate condition. Perhaps surprisingly, listening effort did not vary depending on sentence context: secondary task performance remained consistent during high- and low-context sentences, despite the fact that speech recognition scores were significantly better for high-context sentences.
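
A common way to express this secondary-task decrement is a proportional dual-task cost. The brief sketch below (hypothetical accuracy values, not the study’s data or analysis) shows the general calculation and the pattern described above: effort rises in difficult noise without noise reduction and falls back when noise reduction is active.

```python
# Hedged sketch: quantifying listening effort as the proportional drop in
# secondary-task (visual tracking) performance relative to a single-task
# baseline. Higher cost = more cognitive resources diverted to listening.
def dual_task_cost(baseline_score, dual_task_score):
    """Proportional decrement in secondary-task performance."""
    return (baseline_score - dual_task_score) / baseline_score

# Hypothetical visual-tracking accuracy (%) for one listener
baseline = 92.0
conditions = {
    "moderate, NR off":  88.0,
    "moderate, NR on":   87.5,
    "difficult, NR off": 78.0,
    "difficult, NR on":  86.0,   # effort reduced when NR is active in difficult noise
}

for label, score in conditions.items():
    print(f"{label:18s} dual-task cost = {dual_task_cost(baseline, score):.3f}")
```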

Self-perceived ease of listening ratings showed that subjects rated the moderate listening condition as significantly easier than the difficult condition. There was no significant difference in their ease of listening ratings based on noise reduction, nor was there an interaction between noise reduction and listening condition. These results indicated that the subjects did not feel that noise reduction made listening easier, despite the fact that their measured listening effort was significantly reduced with noise reduction, at least in the difficult listening condition.

There were no significant effects of working memory or processing speed on listening effort, with or without noise reduction. There was, however, a trend for subjects with faster processing speeds to show reduced listening effort with noise reduction activated, although this occurred only in the difficult listening condition.

The results of this study are in agreement with prior reports, in that noise reduction did not significantly improve speech recognition scores in noise. Noise reduction did, however, reduce listening effort in the difficult listening condition with the poorer signal-to-noise ratio. This is in agreement with Sarampalis et al. (2009), who reported a reduction in listening effort only in the more difficult listening condition of -6 dB SNR.  Though working memory and processing speed did not significantly affect listening effort in this study, there was a trend for subjects with faster processing speed to derive more benefit from noise reduction in the difficult listening situation. Prior studies have shown relationships between working memory, processing speed and listening effort, so this is an area that requires further study.

Desjardins and Doherty’s study provides further evidence that tests of listening effort may be a reasonable tool for evaluating the benefits of noise reduction. The authors point out that the ability of noise reduction to reduce listening effort in noisy conditions could have implications for multi-tasking, which could in turn affect safety in some scenarios.  For instance, an older hearing-impaired person, driving a car while talking to a passenger, may unwittingly divert cognitive resources toward the task of understanding conversation, thus potentially reducing their ability to respond to other stimuli related to driving. If noise reduction helps reduce the cognitive demand required for speech recognition in this scenario, in theory there would be more cognitive resources available for driving and attending to surrounding activity.

Concerns like this are particularly important when considering hearing aid fittings for older patients.  Hearing loss increases with advancing age, while working memory and processing speed are known to decline with age.  Therefore, older individuals are more likely to be challenged by the cognitive demands of multi-tasking while at the same time facing additional obstacles like poorer hearing and speech recognition ability and impaired vision.  Though the maximum noise reduction settings used in this study are not usually used in clinical fittings, it may be appropriate to use higher noise reduction settings for older hearing aid users, especially those with known cognitive processing deficits. Directional microphones and automatic noise programs with low-frequency gain reduction may provide additional benefit.  In counseling sessions, clinicians should discuss potential multi-tasking difficulties and the need to reduce distractions in order to optimize speech recognition ability. The importance of properly fitted hearing instruments to facilitate speech communication during simultaneous tasks should also be emphasized, especially at early appointments when hearing loss is diagnosed and hearing aids are being considered. This is particularly relevant to situations in which older, hearing-impaired listeners may be executing other tasks like driving, walking on stairs or through parking lots or streets, in which safety is a concern.

Social isolation is a concern for older adults with hearing loss. The National Council on Aging (www.ncoa.org) reports that hearing loss can reduce participation in social activities, reduce self-confidence, cause depression and strain relationships with family and friends. Hearing loss is also known to increase mental fatigue, making hearing-impaired listeners feel “exhausted” and fatigued from conversational interaction (Hornsby, 2013). Older adults who treat their hearing loss with hearing aids are less likely to experience these negative effects. It is plausible to assert that factors affecting listening effort could also affect the risk of social isolation, as the increased effort and fatigue required in group interactions could be too daunting for some individuals.  The beneficial effect of noise reduction on listening effort is encouraging and should be considered for hearing aid fittings with older adults at risk of social isolation and depression.

 

References

Akeroyd, M. (2008). Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. International Journal of Audiology 47 (Suppl 2), S53-S71.

Bilger, R., Nuetzel, M., Rabinowitz, W. & Rzeczkowski, C. (1984). Standardization of a test of speech perception in noise. Journal of Speech and Hearing Research 27, 32-48.

Desjardins, J. & Doherty, K. (2013). Age-related changes in listening effort for various types of masker noises. Ear and Hearing 34, 261-272.

Desjardins, J. & Doherty, K. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing 35 (6), 600-610.

Dubno, J., Dirks, D. & Morgan, D. (1984). Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Duquesnoy, J. (1983). The intelligibility of sentences in quiet and noise in aged listeners. Journal of the Acoustical Society of America 74, 1136-1144.

Feuerstein, J. (1992). Monaural versus binaural hearing: Ease of listening, word recognition and attentional effort. Ear and Hearing 13, 80-86.

Geller, D. & Margolis, R. (1984). Magnitude estimation of loudness I: Application to hearing aid selection. Journal of Speech and Hearing Research 27, 20-27.

Helfer, K. & Freyman, R. (2008).  Aging and speech-on-speech masking. Ear and Hearing 29, 87-98.

Hornsby, B. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing 34(5), 523-534.

Ng, E., Rudner, M. & Lunner, T. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing aid users. International Journal of Audiology 52, 433-441.

Plomp, R. (1978). Auditory handicap of hearing impairment and the limited benefit of hearing aids. Journal of the Acoustical Society of America 68, 1616-1621.

Salthouse, T. (1994). The aging of working memory. Neuropsychology 8(4), 535-543.

Salthouse, T. (1985). A Theory of Cognitive Aging. New York, NY: North-Holland.

Sarampalis, A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.

Scollie, S., Seewald, R. & Cornelisse, L. (2005).  The desired sensation level multistage input/output algorithm. Trends in Amplification 9, 159-197.

Wingfield, A., Poon, L. & Lombardi, L. (1985). Speed of processing in normal aging: Effects of speech rate, linguistic structure and processing time. Journal of Gerontology 40, 579-585.

Woods, W., Nooraei, N. & Galster, J. (2010). Real-world listening preference for an optimized digital noise reduction algorithm. Hearing Review 17(9), 38-43.