SAN DIEGO -- The annual Gartner Catalyst Conference is the place to learn about the computing trends and technologies that will impact every CIO, developer, architect and end user for years to come. Gartner research vice president Kyle Hilgendorf, who specializes in cloud and Internet of Things (IoT), sat down with SearchCloudApplications for a wide-ranging interview, starting with how to harness IoT data. In part one, he discussed a cloud-first philosophy.
SearchCloudApplications: You suggested that data from an individual IoT sensor is analogous to a raindrop, but put those billions of raindrops together and you're suddenly dealing with a torrential downpour.
Kyle Hilgendorf: When you think about individual pieces of IoT sensor data flooding in -- repetitively and nonstop -- with a lot of variability in the flow of that data, there are a couple of things to consider. How are you going to store that data, and how much of that IoT data is there going to be? This is an emerging architecture that's hard to predict. You don't know how much sensor data will be flooding in, and [may] not be thinking far enough ahead about how many more sensors you're going to add to that application. Building static storage environments can really slow that process down. Throwing disks at the situation may sound trivial, but they can take a long time to provision and deploy.
You've got to deal with this torrent in real time.
KH: The most complex part [of IoT] is looking at the streams of data as they're coming in. The key with IoT is real-time analytics and action based on those analytics. There are times when you want to look at an aggregate of your sensor data days or weeks or months later. But, more often than not, you want to see what is actually happening right now and ask, 'What can I do right now?' So, when you are thinking about the flood of IoT data that is coming in, you have to be able to grab, look at and analyze that data as it's flying through. You want to perform either reactive or predictive analytics based on patterns that you see. Then, you send the result out to the application that takes action based on it.
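The pattern Hilgendorf describes -- examine each reading as it flies through and react immediately, without storing the full stream -- can be sketched in a few lines. This is a minimal illustration, not any particular streaming product; the window size and threshold factor are invented for the example:

```python
from collections import deque

def detect_anomalies(readings, window=5, factor=1.5):
    """Flag readings that exceed `factor` times the rolling-window mean.

    `readings` is any iterable of numbers arriving in order; each value
    is examined as it streams through, and only the last few readings
    are retained -- the rest are discarded after inspection.
    """
    recent = deque(maxlen=window)   # bounded memory: last `window` values
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if value > mean * factor:
                alerts.append((i, value))   # react "right now"
        recent.append(value)
    return alerts

# A steady stream of sensor values with one spike:
stream = [10, 11, 10, 12, 11, 10, 30, 11]
print(detect_anomalies(stream))  # [(6, 30)] -- the spike is flagged
```

In a production system the same comparison would run inside a stream processor rather than a Python loop, but the shape -- bounded state, per-event decision, immediate action -- is the same.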
The monetization of IoT sensor data is becoming increasingly important.
KH: We've talked to organizations that have collected sensor data and have used it for their own benefit. They are realizing that this is data other organizations might be interested in. We've talked to trucking and logistics companies about their trucks and drivers. They've discovered the National Highway Traffic Safety Administration is interested in that data, too. So, they've been able to monetize it by selling it to the government.
There's also the pay-as-you-go model for automobile tires instead of paying the entire purchase price upfront.
KH: It's a possibility. I hadn't thought of that, but that's the beauty of IoT -- it opens the door to so much ideation and innovation. Something that I have heard is insurance companies offering onboard sensors that you can opt into. These measure the way in which you drive. If the sensor determines over a period of time that you brake gently, don't accelerate fast, don't turn hard, along with other data points, they can start to lower your premiums. It's the advancement of a safe-driver discount. They're monetizing that in such a way that they're able to give discounts and increase the value of the insurance company, because they believe, over time, they will have fewer claims. It's a fascinating example of how, with IoT, you can take very small amounts of data and change an entire industry.
In the past, we've designed our systems to handle peak periods that waste computing resources in off-peak times, which is most of the time. How does the cloud change that?
KH: In a traditional IT architecture, no matter what application you're dealing with, there are busy times of the day, week, month or year. Think about a traditional IT application that runs from 8 a.m. to 5 p.m., and is completely silent during the middle of the night. In the morning, there's probably a spike, as people log in. We normally see a dip at lunchtime, a peak again after lunch and then at 5 p.m., it falls off a cliff. There are even applications that are more bursty and elastic than that. As an IT department, you have to ask yourself what resources the application needs at its peak and build for that. The problem is, for all of the other hours in the day, days in the week or months in the year, you end up with troughs of demand even though the infrastructure is still provisioned for the peak. You had to spend enough to deal with those rare, occasional peaks to prevent an unavailability event.
How has cloud changed that?
KH: With cloud, you have the opportunity to design for the baseline and scale up to the peak. That's the elasticity and scalability factor. The benefit is that, in the old world, if the peak you estimated for your application turned out to be wrong and you needed to scale beyond it, it could take weeks or months to get more infrastructure in place. With the cloud, it can theoretically continue to scale infinitely, even beyond your preconceived expectations. This is assuming you design your application properly to do that.
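The design-for-baseline, scale-to-peak idea boils down to a capacity rule that follows actual load rather than a guessed peak. A tiny sketch, with invented numbers (real autoscalers in AWS, Azure or Google Cloud add cooldowns, upper bounds and health checks on top of this):

```python
import math

def desired_instances(current_load, capacity_per_instance, baseline=2):
    """Scale-out rule: provision for the baseline, grow with demand.

    Unlike sizing for a fixed peak, the instance count tracks the
    load you actually see and never drops below the always-on baseline.
    """
    needed = math.ceil(current_load / capacity_per_instance)
    return max(baseline, needed)

# Assume 100 requests/sec per instance and a baseline of 2 instances:
print(desired_instances(50, 100))    # quiet overnight -> 2 (the baseline)
print(desired_instances(950, 100))   # morning spike   -> 10
```

The contrast with the old world is in the last two calls: the same rule handles both the trough and an unforeseen spike, instead of paying for the spike around the clock.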
Also infrastructure related, have you looked at the possibility that the value of data diminishes over time, which could drive aggregation as data ages, reducing storage requirements?
KH: This is a little different, but as a torrent of sensor data floods in, many organizations are realizing they don't need to save everything. If you have a temperature sensor that reads once a minute and sends that temperature in, you may never care to store readings when the temperature is within variance. What you do want to know is when the temperature rises beyond a predefined threshold, which could signal a potentially serious event. There are streaming services that collect all data as it comes in, and then functional computing services that look at that stream and throw away all the data until that exception condition occurs. Then, we want to log that. At that point, there might be a long-term aggregate record.
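The filter Hilgendorf describes -- discard in-variance readings, log only threshold breaches, keep a small aggregate -- can be sketched as follows. The threshold and the reading format are invented for illustration; in practice this logic would sit in a serverless function attached to a streaming service:

```python
def filter_stream(readings, threshold=75.0):
    """Keep only threshold breaches from a sensor stream, plus a
    small aggregate summary of everything that flowed through.

    `readings` is a sequence of (timestamp, temperature) pairs.
    In-range readings are examined and then thrown away.
    """
    events = []          # only the exceptional readings are stored
    count, total = 0, 0.0
    for timestamp, temp in readings:
        count += 1
        total += temp
        if temp > threshold:
            events.append((timestamp, temp))  # potential serious event
    summary = {"count": count, "mean": total / count if count else 0.0}
    return events, summary

# One minute-by-minute stream with a single breach:
stream = [(0, 70.1), (1, 70.3), (2, 80.5), (3, 70.2)]
events, summary = filter_stream(stream)
print(events)            # [(2, 80.5)] -- the only reading retained
print(summary["count"])  # 4 readings seen, 1 stored
```

The storage saving is the point: four readings arrived, one was kept, and the long-term aggregate record survives as the count and mean.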
Some prefer to keep everything.
KH: We've talked to many organizations that say they're afraid to ever throw any data away, because at some unknown future point, they may need to tap into it. A couple of these organizations have said, 'We now see the benefit of long-term pattern analysis of data over several years, and we would not have been able to gain those insights had we not captured that data routinely and repetitively, and kept it all over a long period of time.' There will be reasons not to keep all the data, and other situations where data becomes more valuable over time.
As analytics improve, you may discover trends you weren't looking for in the first place.
KH: We have heard that. An organization would go after use case 'x' and in that process uncover an incredible new insight that allows them to grow their business even further.
In the final installment of this interview, Hilgendorf discusses changes in the skill sets that developers need and the competitive landscape among the leading cloud platform providers.