In The Fabric of Tomorrow, I spoke briefly about Mark Weiser’s influential article in Scientific American, in which he coined the term “ubiquitous computing.” As with many great ideas, this one has a long and illustrious lineage and has continued to evolve. Alan Turing wanted his computers to communicate with each other as well as with humans (1950); Marshall McLuhan (1964) identified electric media as the means by which “all previous technologies – including cities – will be transformed into information systems”; and in 1966 the computing pioneer Karl Steinbuch declared that “in a few decades time computers will be interwoven into almost every industrial product”. Of course, things really got going when DARPA invested in ARPAnet (1969) and TCP/IP was implemented in the early 1970s (see http://postscapes.com/internet-of-things-history for a great timeline).
Weiser points out that “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” This disappearing act shows how seamlessly technology has become an essential part of our everyday lives. Devices have become part of the process of engaging in particular activities. We would miss them if they were gone – as anyone who has been separated from their smartphone knows! But when present, they are invisible.
Mark Weiser’s particular goals at Xerox’s Palo Alto Research Center (PARC) were about augmenting human interaction and problem solving, and he conceived three classes of smart devices: (i) Tabs – wearable, inch-sized devices such as smart badges; (ii) Pads – hand-held, foot-sized devices the size of a writing pad; and (iii) Boards – yard-sized devices for interaction and display (e.g. smart boards). These are all macro devices, and since his initial ideas others have added device classes at sub-millimetre scales. These include (iv) Dust – millimetre and sub-millimetre sized micro-electro-mechanical systems (MEMS), and Smart Dust, which are minute wirelessly enabled sensors; (v) Skin – fabrics based on light-emitting polymers and flexible organic devices such as OLED displays; and (vi) Clay – ensembles of MEMS devices that can form configurable three-dimensional shapes and act as so-called tangible interfaces that people can interact with (see https://en.wikipedia.org/wiki/Hiroshi_Ishii_(computer_scientist)). Critically, these latter classes of devices usher in new ways of thinking about the interactions between devices, users and the environment. The early thinking was a straightforward reflection of the existing tools for interaction and collaboration, but the latter classes take this thinking down paths untraveled – no doubt some will be blind alleys, but others could add motifs and methods that have yet to be conceived.
The term “Internet of Things” (IoT) has been attributed to Kevin Ashton (1999), who had been looking at the ways in which RFID tags, together with the Internet, could be used for advanced supply chain management. Here we see the focus on the sensor and the identity of that which is sensed: this begins to fill out our analogy of the peripheral nervous system of the Cloud. More importantly, it also begins to inform how we might exploit these ideas in the development of the next generation of hearing technologies. For instance, in Starkey Research we have a project that combines the listening and analytical capabilities of a smartphone to analyse a particular acoustic environment and also to record, via Bluetooth, the hearing aid settings selected by the user in that environment. By uploading that information to the Cloud, we can then “crowd source” user preferences for different environment classifications, thereby enabling better adaptive pre-sets and controls.
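To make the idea concrete, here is a minimal sketch of the kind of record such a phone app might assemble and upload. All names and fields are hypothetical illustrations – the actual data format used in the Starkey Research project is not described here:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record pairing a phone-side environment classification
# with the hearing aid settings the user selected in that environment.
@dataclass
class ListeningRecord:
    environment: str      # e.g. "restaurant", "quiet", "traffic"
    gain_db: float        # overall gain the user settled on
    noise_reduction: int  # noise-reduction level chosen (e.g. 0-3)

def to_upload_payload(record: ListeningRecord) -> str:
    """Serialise a record as JSON for upload to a cloud endpoint."""
    return json.dumps(asdict(record))

payload = to_upload_payload(ListeningRecord("restaurant", 12.5, 2))
print(payload)
```

The phone does the heavy lifting (classification and serialisation); the hearing aid only has to report its current settings over Bluetooth.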
The wireless connection of the smartphone to the hearing instrument is only the first step along the road enabled by the IoT. The hearing aid is connected not just to the phone but to anything the phone can connect to, including the Cloud. In the example above, the hearing instrument off-loads the environmental-classification processing to the phone, which in turn uploads the data to the Cloud. It is the offline analysis of the data amassed from a wide range of users that provides the associations between environments and settings – that is, the knowledge that can then inform our designs. On the other hand, there is no reason, in principle, why the data in the Cloud can’t also be used to modify, on the fly, the processing in the hearing instrument.
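The offline analysis step can be sketched very simply: group uploaded records by environment class and take a robust summary (here the median) of the settings users chose in that class. This is only an illustrative aggregation under assumed data – it is not the analysis method actually used:

```python
from collections import defaultdict
from statistics import median

# Hypothetical uploaded records: (environment_class, user-chosen gain in dB)
records = [
    ("restaurant", 12.0), ("restaurant", 14.0), ("restaurant", 13.0),
    ("quiet", 4.0), ("quiet", 5.0),
    ("traffic", 10.0),
]

def crowd_source_presets(records):
    """Derive a crowd-sourced pre-set per environment class by taking
    the median of the gains users selected in that environment."""
    by_env = defaultdict(list)
    for env, gain in records:
        by_env[env].append(gain)
    return {env: median(gains) for env, gains in by_env.items()}

presets = crowd_source_presets(records)
print(presets)  # {'restaurant': 13.0, 'quiet': 4.5, 'traffic': 10.0}
```

The median is chosen here because user-selected settings are noisy and a few outliers should not drag the pre-set around; the same pipeline could just as well feed a per-environment model.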
The point is that, under the new paradigm, the hearing aid is no longer an isolated, set-and-forget instrument. It can be updated and modified on the fly through machine-level interaction, human interaction, or a combination of the two. The user, the health professional, the manufacturer, and the crowd can all be leveraged to increase the performance of the instrument. The instrument itself becomes a source of data that can be used to optimize its own operation or, in aggregate with the crowd, the operation of whole classes of instruments.
The capacity to optimize the instrument at the level of the individual will depend, in part, on the nature and quality of the data it can provide.
Read Informed Dreaming here.