Over the past few years, the internet of things has exploded. Thanks to Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, hardware developers have been able to fit far more functionality into the same footprint. The result is smaller computers, smaller phones and other consumer-ready electronic devices.
Everything that connects to the internet needs chips to do so, but only recently have chips become sufficiently small. This, combined with the explosion in wireless network availability, has made it easy to keep devices connected and give them remote functionalities.
That’s the summary of IoT: simple devices that can be controlled and monitored through new cost-effective chips of appropriate size. As large companies like Apple and Microsoft continue to invest heavily in the development of this technology, the question of how to build IoT becomes a question of how to manage the massive influx of data.
Oceans of information
For years, companies have relied on systems that compute and control from a relatively central location. Even cloud-based systems rely on a single set of software components that churn through data, gather results and serve them back.
The internet of things changes that dynamic. Suddenly, thousands of devices are sharing data, talking to other systems and offering control to thousands of endpoints.
This brings about new issues with data collection and analysis. Because of how these networks share data, IoT devices often transmit small pieces of information intermittently, with no guarantee of when that data will arrive. This is particularly true in smart cities and buildings, where thousands of sensors generate data at varying intervals and leave the processing to the cloud.
As these networks evolve, they run into problems created by the very computing trends that enabled them. Thanks to big data and smarter networking (mesh topologies, low-power devices and protocols), older centralized systems can no longer handle the influx of information they helped create.
The answer to these problems is a blend of cloud storage and edge computing. To take advantage of both technologies, however, IT professionals must understand how they operate.
The edge and the cloud
Edge computing and cloud computing are nearly opposites in the way they're organized. Cloud computing efficiently uses a large chunk of a network to process and store information at a centralized spot: the data center where the cloud point of presence (PoP) lives. This serves its purpose well, thanks to the tight interconnectivity of nodes sharing data with one another on a high-performance network.
With the rise of IoT, more companies want their computation capacity closer to the devices that are collecting information. Devices on IoT systems tend to be low on power and computational capability, so edge computing moves central computational power out of the cloud and closer to where end users' devices exist. For deployments with large numbers of clients, this shortens the round trip and speeds up processing.
Putting the two technologies together allows the cloud to handle general computation tasks while edge computing takes care of more client-specific needs. For example, data aggregation can rely on edge computing to aggregate data into a single set and then send it to the cloud for further processing.
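The aggregation pattern above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: the sensor record shape and the `send_to_cloud` function are hypothetical stand-ins for whatever format and upload mechanism a real deployment uses.

```python
import json
import statistics

def aggregate_readings(readings):
    """Collapse a batch of raw sensor readings into one summary record
    at the edge, so only the summary travels to the cloud."""
    values = [r["value"] for r in readings]
    return {
        "sensor_count": len(readings),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
    }

def send_to_cloud(summary):
    # Placeholder for a real upload (e.g., an HTTPS POST to a cloud endpoint).
    return json.dumps(summary)

# Three temperature readings aggregated at the edge, then shipped upstream
# as a single compact record instead of three separate transmissions.
batch = [
    {"sensor": "t1", "value": 21.0},
    {"sensor": "t2", "value": 22.0},
    {"sensor": "t3", "value": 23.0},
]
summary = aggregate_readings(batch)
payload = send_to_cloud(summary)
```

The design point is the division of labor: the edge reduces many raw readings to one summary, and the cloud receives only the refined result for further processing.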
By centralizing general workloads and handling more specific tasks on the edge of the network, IT professionals can improve user experiences while optimizing network and computational resources.
Use technology to get more from data
Edge computing is only now gaining popularity at telecommunications companies, but as 5G networks become widely available, this technology will quickly become widespread. IT professionals should follow these three steps to prepare themselves and their companies for the approaching IoT data tidal wave:
1. Prepare network architecture
Today, early versions of edge computing are only being used in content delivery networks and some software-defined networks or telecommunications networks. For companies outside this niche, preparing to accommodate edge computing now will make adoption much easier in the future. Start thinking about current architecture and prepare for expanded edge capabilities.
2. Address data aggregation
Industries that currently control their edges, such as IoT networks and telecommunications companies, should already be aggregating their data as close to the edge as possible before transmitting in bundles back to central systems. Introduce queues and caches on the edge to prepare for the computational capabilities of merging and compressing data.
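The queue-and-compress step described above can be sketched as follows. This is a simplified model under stated assumptions: the `EdgeBuffer` class, its `bundle_size` parameter and the JSON-plus-zlib encoding are illustrative choices, not a prescribed design.

```python
import json
import zlib
from collections import deque

class EdgeBuffer:
    """Queue readings at the edge, then merge and compress them into
    bundles before transmitting back to a central system."""

    def __init__(self, bundle_size=100):
        self.queue = deque()
        self.bundle_size = bundle_size

    def enqueue(self, reading):
        self.queue.append(reading)

    def ready(self):
        # A bundle is worth sending once enough readings have queued up.
        return len(self.queue) >= self.bundle_size

    def make_bundle(self):
        # Merge queued readings into one payload and compress it,
        # trading a little edge CPU for less upstream bandwidth.
        count = min(self.bundle_size, len(self.queue))
        batch = [self.queue.popleft() for _ in range(count)]
        return zlib.compress(json.dumps(batch).encode("utf-8"))

buf = EdgeBuffer(bundle_size=3)
for i in range(3):
    buf.enqueue({"sensor": f"s{i}", "value": i})
bundle = buf.make_bundle()          # compressed bytes, ready to transmit
restored = json.loads(zlib.decompress(bundle))
```

A central receiver simply decompresses and parses each bundle, as the last line shows, so the cache on the edge is transparent to downstream processing.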
3. Seek out opportunities to optimize
Edge computing is all about efficient use of resources. Mapping architecture from a resource usage perspective is useful to find new ways to optimize.
Systems can offset the added cost of edge computing with the reduced cost of transmitting already-refined data to the center and computing there. As edge processing matures, computational capabilities at the edge will increase, providing more opportunities for prepared companies to leverage the technology.
Network and IoT-based companies can’t afford to wait until new technology arrives before beginning preparations to adopt. By following these steps, IT leaders can consider the progressions of IoT data and edge computing and prepare for their widespread arrival.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.