Starkey Research & Clinical Blog

A Digital Finger on a Warm Pulse: Wearables and the future of healthcare

 

Taken together, blood pressure, glucose and oxygenation levels, sympathetic neural activity (stress levels), skin temperature, level of exertion and geo-location provide a very informative, in-the-moment picture of physiological status and activity. All of these are provided today by clinical-grade smart monitors used in medical research projects around the world. Subtle changes in patterns over time can provide very early warnings of many disease and dysfunctional states (see this article in the journal Artificial Intelligence in Medicine).

It is well established that clinical outcomes are highly correlated with timely diagnosis and efficient differential diagnosis. In the not-too-distant future your guardian angel will be a medic-AI that uses machine learning to individualize your precise clinical norms, matched against an ever-evolving library of norms harvested from the Cloud. You never get to your first cardiac event because you take the advice of your medic-AI and make subtle (and therefore very easy) modifications to your diet and activity patterns throughout your life. If things do go wrong, the paramedics arrive well before your symptoms do! The improvements in quality of life and the savings in medical costs are (almost) incalculable. This is such a hot topic in research at the moment that Nature recently ran a special news feature on wearable electronics.

There are, however, more direct ways in which your medic-AI can help manage your physiological status. Many chronic conditions today are managed with embedded drug delivery systems, but these need to be coupled with periodic hospital visits for blood tests and status examinations. Wirelessly connecting your embedded health management system (which includes an array of advanced sensors) to your medic-AI avoids all that. In fact, the health management system can be designed to ensure that a wide range of physiological parameters remain within their normal ranges despite the occasional healthy-living lapse of its host.

For me as a neuroscientist, the most exciting developments in sensor technology are in the ambulatory measurement of brain activity. Recent work in a number of research laboratories has used different ways to measure the brain activity of people listening to multiple talkers in conversation, not unlike the cocktail party scenario. What they have found is nothing short of amazing. Using relatively simple EEG recordings from scalp electrodes and the audio streams of the concurrent talkers, together with rather sophisticated machine learning and decoding, these systems are able to detect which talker the listener is attending to. Some research indicates that not only the attended talker but also their spatial location can be decoded from the EEG signal, and that this process is quite resistant to acoustic clutter in the environment.
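To give a flavour of how such decoding works: a common approach in this literature reconstructs an estimate of the speech envelope from the EEG with a trained linear decoder, then correlates that estimate with the envelope of each candidate talker; the talker with the higher correlation is taken to be the attended one. The sketch below shows only that final comparison step, on synthetic data, with the decoder output simulated rather than learned (the function name and the noise level are illustrative assumptions, not taken from any particular study):

```python
import numpy as np

def attended_talker(decoded_env, candidate_envs):
    """Return the index of the talker whose speech envelope best
    matches the envelope reconstructed from the listener's EEG."""
    scores = [np.corrcoef(decoded_env, env)[0, 1] for env in candidate_envs]
    return int(np.argmax(scores)), scores

# Synthetic demo: pretend the EEG-reconstructed envelope is a noisy
# copy of talker 0's envelope, so talker 0 should win.
rng = np.random.default_rng(0)
env_a = np.abs(rng.standard_normal(1000))          # talker 0 envelope
env_b = np.abs(rng.standard_normal(1000))          # talker 1 envelope
decoded = env_a + 0.5 * rng.standard_normal(1000)  # simulated decoder output

idx, scores = attended_talker(decoded, [env_a, env_b])
```

In the real systems the decoder itself is learned (e.g. regularized regression from multi-channel EEG to the envelope), which is where the "rather sophisticated machine learning" comes in; the correlation-and-argmax decision above is the simple part.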

This is a profound finding because it shows how we can follow the intention of the listener: how they are directing their attention and how this varies over time. This provides information we can use to direct the signal processing of the hearing aid, focusing it on the spatial location of the attended talker and enhancing the information the listener wants to hear. It effectively defines for us what is signal and what is noise when the environment is full of talkers of which only one is of interest at any particular instant in time.

Other very recent work has demonstrated just how few EEG electrodes are needed to obtain robust signals for decoding once the researchers know what to look for. Furthermore, the recording systems themselves are now sufficiently miniaturized that these experiments can be performed outside the laboratory while the listeners are engaged in real-world listening activities. One group of researchers at Oxford University actually has their listeners cycling around the campus while doing the experiments!

These developments demonstrate that the necessary bio-sensors are, in principle, sufficiently mature to support cognitive control of signal processing for targeted hearing enhancement. This scenario also provides a wonderful example of how the hearing instrument can share its processing load depending on the time constraints of the processing. Decoding the EEG signals will require significant computation, but this computation is not time-critical: a few hundred milliseconds is neither here nor there, a syllable or two in the conversation. The obvious solution is for the Cloud to take that processing load and then send the appropriate control codes back to the hearing aid, either directly or via its paired smartphone. As the smartphone is listening in on the same auditory scene as the hearing aid, it can also provide another access point for sound data, as well as additional and timelier processing capability for other, more time-critical elements.
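The division of labour described above can be sketched as a simple dispatcher that routes each processing task to the most capable tier whose round-trip latency still fits the task's deadline. The tier names and latency figures here are illustrative assumptions for the sake of the sketch, not measurements of any real system:

```python
# Assumed round-trip latencies (milliseconds) for each processing tier,
# ordered below from most to least computational capability.
TIER_LATENCY_MS = {"cloud": 300, "smartphone": 30, "hearing_aid": 5}

def route(task_deadline_ms):
    """Pick the most capable tier whose latency fits the task deadline."""
    for tier in ("cloud", "smartphone", "hearing_aid"):
        if TIER_LATENCY_MS[tier] <= task_deadline_ms:
            return tier
    return "hearing_aid"  # fall back to on-device processing

# EEG attention decoding tolerates syllable-scale delay -> Cloud;
# scene analysis might suit the phone; beamforming must stay on-device.
eeg_tier = route(500)
scene_tier = route(50)
beam_tier = route(8)
```

The design choice this illustrates is the one the paragraph makes: latency tolerance, not raw compute demand, decides where each stage of the processing chain should live.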

But no one is going to walk around wearing an EEG cap with wires and electrodes connected to their hearing aid. A great deal of sophisticated industrial design goes into a hearing aid, but integrating such a set of peripherals so that they are acceptable to wear outside the laboratory could well defeat the most talented designers. So how do we take the necessary technology and incorporate it into a socially friendly and acceptable design? We start by examining developments in the worldwide trend of wearables, and then look at some mid-term technologies that could well enter the market as artistic statements and status symbols.

 

Sources: