Today’s IoT platforms and systems are either built cloud-native or augmented to become so. The cloud provides speed to market, scalability, and room for creative innovation and experimentation, all of which are business advantages when adapting to market demands. APIs make integrations straightforward and offer greater extensibility through services built into the cloud platform.
The typical approach to data collection is to gather all of the data from devices and sensors, upload it to the cloud provider and process it in a central location, whether via stream processing or incremental data capture. This has two drawbacks. First, the data must travel a long distance, and that journey is constrained by network connectivity and, ultimately, the speed of light. Second, from a business perspective, the provider of these services bears the cost to collect (network), store and analyze the data coming from each thing.
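To make the cost of this centralized pattern concrete, here is a minimal sketch of a device shipping every raw reading to the cloud individually. All names (the sensor ID, the reading shape, the `upload_all` helper) are hypothetical illustrations, not any platform's actual API; the point is simply that message count and bytes on the wire grow linearly with the sample rate.

```python
import json

# Hypothetical raw readings collected on a device over one minute
# (one sample per second from a single temperature sensor).
readings = [{"sensor": "temp-01", "ts": t, "value": 20.0 + 0.1 * t}
            for t in range(60)]

def upload_all(readings):
    """Centralized pattern: serialize and upload every raw reading.

    Returns (message count, total bytes on the wire) so the network
    cost of the naive approach is visible.
    """
    payloads = [json.dumps(r).encode() for r in readings]
    return len(payloads), sum(len(p) for p in payloads)

messages, total_bytes = upload_all(readings)
print(messages, "messages,", total_bytes, "bytes uploaded")
```

One sensor sampling once per second already produces sixty messages a minute; at fleet scale, every one of those messages is a network, storage and compute cost borne centrally.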
Today’s IoT systems are based on capturing data and adding context. The user typically interacts with an IoT system via mobile or web applications, or by viewing reports and analytics generated from it. As the computing paradigm shifts toward voice, virtual reality and augmented reality, faster data access and more responsive interfaces will depend on increasingly rapid data analysis. Today’s voice response systems (Alexa, Siri, Cortana and Google Assistant) and their interconnected cloud services are already often slow to respond to voice commands, or fail outright. The cause may be the network or the complexity of all the interconnected cloud services. Regardless of the reason, we need to decouple these devices from requiring high-quality connections and allow them to operate using distributed computing resources.
As we develop new algorithms and increase our reliance on data and connectivity, we will require additional processing and storage. Distributing this work will be the only way to scale and deliver the expected user experiences and business outcomes. Distributed analysis means some processing will happen on the device, some on a gateway or intermediary device, some at the network edge, some within a content distribution network and, finally, some across various cloud providers.
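The gateway tier of that pipeline can be sketched in a few lines. This is a hypothetical illustration, not a real gateway product's API: the gateway reduces a window of raw samples to a single summary record and forwards only that summary upstream, so far less data reaches the costlier tiers.

```python
import statistics

# Hypothetical raw samples arriving at a gateway from a local sensor
# (same one-reading-per-second scenario as before).
raw = [20.0 + 0.1 * t for t in range(60)]

def summarize(window):
    """Edge pattern: collapse a window of raw samples into one record.

    Only this summary, not the sixty raw values, travels to the next
    tier (network edge, CDN or cloud).
    """
    return {
        "count": len(window),
        "min": min(window),
        "max": max(window),
        "mean": round(statistics.mean(window), 2),
    }

summary = summarize(raw)
print(summary)
```

The trade-off is latency versus fidelity: the cloud sees one record per window instead of every sample, which is exactly the kind of in-path processing each layer of the distributed architecture can contribute.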
The future must include edge computing, both to increase the reliability of these services and to reduce operational expenses for those operating IoT platforms. The future architecture of these technologies will likely look something like a pipeline from device to gateway to network edge to content distribution network to cloud, with processing happening at each layer of the platform.
In this design, less of the data collected per connected thing or device will make it all the way to the cloud, minimizing the use of costly resources while at the same time exploiting capabilities along the path. Gartner predicts that by 2022, as a result of digital business projects, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud, up from less than 10% today (“Technology Insight: Edge Computing in Support of the Internet of Things,” published 13 July 2017, ID G00332393).
The next great platform shift in the post-public cloud era will be won by those who understand, facilitate and enable this transition. I am watching existing providers attempting to make this change, along with innovative entrepreneurs building the next generation of application platforms. The next two to four years will likely define which companies win the movement to the next phase of computing. Will it be Amazon, Microsoft, Google, Apple or your startup?
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.