In Informed Dreaming, we explored the impact of the rate of scientific discovery and technological change on research in general, and on hearing aid research in particular. From here, we will begin to look more closely at how some of that change will manifest itself in the everyday technologies of tomorrow. So let's sketch that roadmap.
There are two main technological forces in this story – computing power and connectivity. These are the backbone from which many other profoundly influential players will derive their power. If there were only one dominant idea, it would be ubiquitous computing – a term coined by the brilliant computer scientist Mark Weiser in 1991 in his influential Scientific American article "The Computer for the 21st Century." As head of the Computer Science Laboratory at Xerox PARC (Palo Alto Research Center), he envisaged a future in which our world is inextricably interwoven with a technological fabric of networked "smart" devices. Such a network has the capability to manage our environments from the macro scale down to a detailed, individualized level – everything from the power grid to the time and temperature of that morning latte.
But these devices are also inputs to the system – detectors and sensors feeding a huge river of information into the central core, or the cloud as we now know it. Many of these are already worn by people (mobile phones, smart watches, activity monitors, etc., all uploading to the cloud), and the sophistication and bio-monitoring capability of these wearables is increasing by the week. Moreover, many of these sensors are stationary but have highly detailed knowledge of their transactions: cashless payments record the person, the time, the place, and the goods; so do tapping on and off public transport, taking a taxi, an Uber, or a flight, a Facebook post, street closed-circuit television security systems, your IP address, cookies, and the browser trail.
Notwithstanding the issues of privacy (if indeed that still exists), this provides an inkling of the data flowing into the cloud – no doubt only the very tip of this gigantic iceberg. Big Data is here and it is here to stay, and although Google is King, these particular information technologies are but babies.
I was fortunate enough to attend the World Wide Web conference in 1998, where Tim Berners-Lee – the man who invented the World Wide Web while working at CERN in 1989 – began promoting the idea of the Semantic Web: a means by which machines can efficiently communicate data with one another. In the ensuing years, much work has gone into developing the standards and implementing the systems. In that time, however, two other massive developments have occurred that may overshadow or subsume these efforts. On the one hand, natural language processing has matured, using both text and audio, in the forms of Siri, Google Talk and Cortana, to mention just a few. On the other hand, driven by huge strides in cognitive neuroscience, processing power, and advanced machine learning, we are witnessing a rebirth of Artificial Intelligence (AI) and the promise of so-called Super Intelligence.
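To make the Semantic Web idea concrete: at its core, it amounts to publishing facts as machine-readable subject–predicate–object triples that any program can query. The toy sketch below illustrates only that core idea in plain Python – the vocabulary and data are invented for illustration, and real systems use the RDF standards rather than ad hoc structures like this.

```python
# Toy illustration of the Semantic Web idea: facts stored as
# (subject, predicate, object) triples that any machine can query.
# The names below are made up for illustration, not a real ontology.

triples = [
    ("web", "inventedBy", "TimBernersLee"),
    ("web", "inventedAt", "CERN"),
    ("TimBernersLee", "proposed", "SemanticWeb"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Ask "what do we know about the web?" - both facts come back.
print(query(subject="web"))
```

The point is that the data carries its own structure: a machine that has never seen these facts before can still answer pattern queries over them, which is exactly the kind of machine-to-machine data exchange the Semantic Web was meant to standardize.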
So just how can we design listening (hearables) technologies, hearing aids in particular, that can capitalize on these profound developments? Well, let’s take a sneak peek at what a future hearing aid might look like in this brave new world.
Imagine a hearing aid that can listen to the world around the wearer and break down that complex auditory scene into its key relevant pieces – sorting the meaningful from the clutter. A hearing aid that can also listen in on the brain activity of the listener, identify the wearer's focus of attention, and enhance the information from that source as it is coded by the brain. A hearing aid that is, in fact, not a hearing aid at all but a device that people wear all the time: a communication channel to other people and machines, a source of entertainment, a brain and body monitor that also maps their passage through space. Such a device provides support in adverse listening conditions to normal-hearing and hearing-impaired listeners alike – it simply changes modes of operation as appropriate.
Possibly the most surprising thing about this scenario is that, in advanced research laboratories around the world (including Starkey Research), the technologies that would enable such a device exist RIGHT NOW. Of course, they are not yet developed to provide the level of sophisticated sensing and control required to give life to this vision, nor are they in a form that people can put in their ears. But they do exist, and if we have learned anything from watching the progress of science and technology over the last few decades, their emergence as the Universal Hearable Version 1.0 will likely happen even sooner than we might sensibly predict from where we now stand.