
How IoT is making distributed computing cool again

The distributed computing lexicon has historically been relegated to conversations within the walls of military organizations, tech enterprises and the halls of academia. ARPANET technology in the 1960s begot the internet. Salesforce helped make “software as a service” a household term in 2000. And while researchers have talked about distributed computing for years, today those concepts are critical to the success of internet of things initiatives. Investments like Ford Motor Company’s $182.2 million into Pivotal, a cloud-based software and services company, signal distributed computing’s migration from the research lab to the boardroom.

Enterprises are starting to place their bets on how they will capitalize on the significant IoT opportunities that are emerging. These investments will have ramifications for a company’s ability to function and deliver the experience its customers demand. The applications that result from these multimillion-dollar bets need to provide an always-on, resilient, accurate and cost-effective service. To do this, it is essential that the C-suite understand the distributed computing lexicon.

Terms like “eventual consistency,” “vector clocks,” “immutable data,” “CRDTs” and “active anti-entropy” are familiar to anyone involved in the science of distributed systems. If they’re not yet familiar to you, ask yourself the questions below to ensure you’re approaching distributed data properly. This two-part series will examine the answers to those questions and help illuminate how organizations can develop cost-effective distributed architectures that ensure resiliency, usability and accuracy.
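To make one of those terms concrete, consider a conflict-free replicated data type (CRDT). The sketch below (a toy grow-only counter in Python; the node names are invented for illustration) shows the core idea: each replica accepts updates independently, and a merge rule guarantees every replica converges to the same value.

```python
# A minimal grow-only counter (G-Counter), one of the simplest CRDTs.
# Each node increments only its own slot; merging takes the per-node
# maximum, so replicas converge regardless of update order.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count contributed by that node

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Element-wise max makes merging commutative, associative and
        # idempotent -- the properties that let replicas reconcile
        # without coordination.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

# Two replicas count independently, then reconcile.
a, b = GCounter("node-a"), GCounter("node-b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```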

How can you architect to ensure your data is available?

The distributed world’s guiding principle is the CAP theorem, formulated by Eric Brewer, a tenured professor of computer science at UC Berkeley. CAP stands for Consistency, high Availability and tolerance to network Partitions, and the theorem suggests that a distributed computer system can have, at most, two of those three properties.

In a distributed system, availability rests on the idea of independent failure: when one or more nodes fail, the rest of the system continues to function, so the information the system processes is always available to the user. Though it predates the CAP theorem, ARPANET is an example of a distributed system architected for availability. It was constructed to link smaller networks of computers into a larger, latticework network that researchers and scientists could access even if they were not located near a mainframe or network hub. If one of the network computers went down, researchers could still access the data crisscrossing the network.

Availability has been thrust to the forefront in the internet age. Highly trafficked sites such as Facebook and Amazon have favored availability over consistency. After all, you’re unlikely to be annoyed with Amazon if the latest product review takes a few seconds to appear; you are likely to be annoyed if you can’t log onto the site.
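To see what “favoring availability over consistency” can look like in practice, here is a minimal sketch (a toy read path in Python, not any particular database’s API): a client returns the freshest copy it can reach, tolerating failed replicas at the cost of possibly serving stale data.

```python
import random

# Toy replicated read path that favors availability: if a replica is
# unreachable, serve possibly stale data from any live replica rather
# than fail the request.

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}   # key -> (value, version)
        self.alive = True

    def get(self, key):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return self.store.get(key)

def available_read(replicas, key):
    candidates = []
    for replica in random.sample(replicas, len(replicas)):
        try:
            record = replica.get(key)
            if record is not None:
                candidates.append(record)
        except ConnectionError:
            continue  # tolerate individual node failures
    if not candidates:
        raise LookupError("no live replica holds the key")
    # Return the newest version reachable -- which may still be stale
    # if the most up-to-date replica is partitioned away.
    return max(candidates, key=lambda rec: rec[1])[0]

r1, r2, r3 = Replica("r1"), Replica("r2"), Replica("r3")
r1.store["review:42"] = ("five stars", 2)   # freshest copy
r2.store["review:42"] = ("four stars", 1)   # stale copies
r3.store["review:42"] = ("four stars", 1)
r1.alive = False                            # freshest node fails
print(available_read([r1, r2, r3], "review:42"))  # -> four stars
```

The read still succeeds when the freshest node is down; the user sees a slightly older review rather than an error, which is exactly the trade-off Amazon-style sites make.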

In today’s customer-centric business world, IoT initiatives are bringing back the idea of high availability and architectures built to withstand failure. A city government may choose to implement an IoT-enabled traffic grid. Each traffic light (equipped with a number of sensors) must communicate with the other traffic lights around it, smart vehicles in the vicinity and a local computing node that processes or reroutes the sensor data depending on its use. The system will likely employ a number of nodes throughout the traffic grid to collect the data and make it available to the applications. If one node fails, however, the data it collects and processes must still be available to the rest of the system and possibly to other central applications. Boardrooms typically assume their data will always be available to the application that needs that data, even in a complex distributed architecture. If they wish to implement IoT-enabled systems, they must understand those systems have to be built with failure in mind.
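A minimal sketch of what “built with failure in mind” can mean for the traffic grid (Python; the node names and replication factor are assumptions for illustration): each light’s readings are written to more than one node, so losing a single node does not make the data unavailable.

```python
import hashlib

# Toy ingestion path: each traffic light's readings go to a stable
# primary and backup node, chosen by rendezvous hashing, so a single
# node failure never strands the data.

NODES = {"node-east": True, "node-west": True, "node-central": True}

def preferred_nodes(light_id, count=2):
    # Deterministically rank nodes per light so each light keeps a
    # stable primary and backup.
    ranked = sorted(
        NODES,
        key=lambda n: hashlib.md5(f"{light_id}:{n}".encode()).hexdigest(),
    )
    return ranked[:count]

def store_reading(light_id, reading, storage):
    written = 0
    for node in preferred_nodes(light_id):
        if NODES[node]:  # skip failed nodes instead of erroring out
            storage.setdefault(node, []).append((light_id, reading))
            written += 1
    if written == 0:
        raise RuntimeError(f"no healthy replica for {light_id}")
    return written

storage = {}
store_reading("light-17", {"queue_length": 4}, storage)
NODES["node-east"] = False                               # a node fails
store_reading("light-17", {"queue_length": 6}, storage)  # still succeeds
```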

How do you minimize latency and performance degradation to achieve usability?

Distributed systems fight physics. A system can only move so much data before it slows down and latency grows to an untenable point. E-commerce websites were some of the first to use distributed architectures to achieve usability. They keep product information for every item in their inventory in centralized data stores. They’ll also take the most-used portion of their product assortment, such as the top 25% best-selling items, and cache that information in the cloud at the edges of the network. Replicating and storing the most-accessed data in a distributed location keeps website transactions from overwhelming the central database and helps ensure users get fast response times. Distributed e-commerce websites are designed with end users in mind: if the central database becomes overwhelmed and the site slows down, customers will leave before making their purchases.
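The pattern behind that hot-item strategy is, at its core, a cache in front of the central store. Here is a minimal sketch (Python; the catalog, SKUs and cache capacity are invented for illustration):

```python
from collections import OrderedDict

# Toy central product database: 1,000 items.
CENTRAL_DB = {f"sku-{i}": f"product details for sku-{i}" for i in range(1000)}

class EdgeCache:
    """Serve hot items locally; fall through to the central store."""

    def __init__(self, capacity=250):      # e.g., room for the top 25%
        self.capacity = capacity
        self.cache = OrderedDict()          # ordering tracks recency

    def get(self, sku):
        if sku in self.cache:
            self.cache.move_to_end(sku)     # mark as recently used
            return self.cache[sku]
        value = CENTRAL_DB[sku]             # slow path: central database
        self.cache[sku] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

edge = EdgeCache()
edge.get("sku-7")   # first request hits the central database
edge.get("sku-7")   # repeat request is served from the edge cache
```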

Today’s IoT initiatives have adopted distributed computing concepts to ensure the data they generate and analyze remains usable, even when that data must traverse large geographic distances. Companies must also design their IoT initiatives with the end user in mind. Consider a weather company’s sensor network. The company must analyze some of the data each sensor generates and deliver it, in real time, to the weather application on users’ mobile devices. The sensors take readings frequently; the company sends some of that data back to the core for analysis, but must process the high-frequency readings near the sensor itself. These are the readings that look for conditions, such as sudden barometric pressure drops, that warrant weather alerts. To ensure usability, weather companies institute a distributed infrastructure with nodes that facilitate data analysis for a cluster of sensors. Those nodes also perform edge analytics to determine which data is worth sending back for further analysis.
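A minimal sketch of that edge filter (Python; the pressure threshold, window size and sampling rate are assumptions, not any weather company’s actual rules): the node scans high-frequency readings locally and forwards only alerts and periodic samples to the core.

```python
from collections import deque

PRESSURE_DROP_HPA = 3.0   # alert if pressure falls this much...
WINDOW = 12               # ...across the last 12 readings

def edge_node(readings_hpa):
    window = deque(maxlen=WINDOW)
    to_core = []                       # the only data that leaves the edge
    for i, reading in enumerate(readings_hpa):
        window.append(reading)
        # Local high-frequency check: sudden drop within the window.
        if len(window) == WINDOW and window[0] - reading >= PRESSURE_DROP_HPA:
            to_core.append(("ALERT", i, reading))
        # Downsample: forward every 12th raw reading for trend analysis.
        if i % WINDOW == 0:
            to_core.append(("SAMPLE", i, reading))
    return to_core

# Steadily falling pressure triggers alerts once the drop exceeds 3 hPa.
readings = [1013.0 - 0.4 * i for i in range(30)]
for event in edge_node(readings):
    print(event)
```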

Your data is available and usable. Now what?

Organizations must architect their systems under the assumption of failure to achieve availability. They must architect their systems under the assumption that data analysis in one centralized location could render data unusable for distributed end users. Even if organizations are able to architect for availability and usability, other issues remain.

With so many different applications pouring data into, and pulling data from, distributed infrastructures, accuracy will be an issue. How do you know that the data you use to generate predictive insights is giving you a useful picture of the future? How do you know all of your applications are running smoothly?

The next part of this series will discuss how to architect for accuracy. And, most importantly, it will examine how to develop a distributed data system that is cost-effective. Boardrooms are making multimillion-dollar investments in today’s infrastructure tools because IoT is making distributed computing cool again; those tools must deliver a strong ROI if a modern infrastructure is ever going to receive approval.

