In the 13th century, Marco Polo set out with his father and uncle on a great voyage across uncharted territories. They traveled across the vast continent of Asia and became the first Europeans to visit the Chinese capital. For 17 years, Marco Polo explored many parts of the world before finally returning to Venice. He later wrote about and mapped out his experiences, inspiring a host of new adventurers and explorers to travel to the exotic lands of the East.
We are all on a voyage similar to Marco Polo’s, navigating the uncharted ocean of IoT big data — seeking those elusive use cases. As we navigate this complex ocean of industrial IoT data, we need two things:
- Maps (industry-specific use cases)
- Meta patterns (common across industries)
These would help other “Data Marco Polos” avoid the potential minefields we have encountered.
We have abstracted and distilled common big data use cases in industrial IoT that pass the business case test. These are based on real-world projects executed across energy and heavy engineering industries in the U.S. and Japanese markets. Here are the seven core IoT big data use cases that we mapped out:
1. Creating new IoT business models
We worked with a customer that used our IIoT big data technology to restructure the pricing model of field assets based on ultra-specific usage behavior. Before adopting the IIoT analytics product, the customer had a uniform price point for each asset. Deploying the IoT analytics technology helped them transition from a uniform pricing model to executing usage-based dynamic pricing that resulted in improved profitability.
2. Minimizing defects in connected plants
The client was a process manufacturing plant in the Midwest that makes electrical safety products. The quality of these products could mean life or death for people working on the power grid. This customer had sufficiently digitized the manufacturing process to get a continuous real-time stream of humidity, fluid viscosity and ambient temperature readings. We used this new, rich sensor data pool to identify the drivers of defect density and minimize them.
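A minimal sketch of this kind of driver analysis: rank each sensor channel by the strength of its correlation with observed defect counts, then focus remediation on the top-ranked channel. The channel names and readings below are illustrative, not from the actual project.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def rank_defect_drivers(readings, defects):
    """Rank sensor channels by |correlation| with defect counts."""
    scores = {name: pearson(vals, defects) for name, vals in readings.items()}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical per-batch sensor readings (illustrative values only)
readings = {
    "humidity":     [30, 45, 50, 62, 70, 80],
    "viscosity":    [1.2, 1.1, 1.3, 1.2, 1.1, 1.3],
    "ambient_temp": [20, 21, 22, 21, 23, 22],
}
defects = [2, 4, 5, 7, 8, 10]  # defects per batch

ranked = rank_defect_drivers(readings, defects)
print(ranked[0][0])  # the channel most strongly associated with defects
```

Correlation is not causation, of course; in practice this kind of ranking is a screening step that tells the process engineers where to run controlled experiments first.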
3. Data-driven field recalibration
Many assets ship with default factory settings that are never recalibrated, resulting in suboptimal performance. We worked with an industrial giant charged with shipping a crucial engineering asset to stabilize the power grid. These assets were constantly inserted into the network ecosystem with default parameter settings. One powerful question we asked was, “Which specific parameter settings discriminate the failed assets from the assets performing well?” Discriminant analysis revealed the parameter settings that needed to be recalibrated, along with the optimal band for each setting. By putting this simple intervention in place, we were able to dramatically reduce the number of failure events in the system.
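As a simple stand-in for full discriminant analysis, the sketch below flags parameters whose failed-group mean differs markedly from the healthy group (a standardized mean difference) and recommends recalibrating into the band the healthy assets occupy. All parameter names, values and the threshold are hypothetical, not from the actual engagement.

```python
from statistics import mean, pstdev

def discriminating_parameters(healthy, failed, threshold=1.0):
    """Flag parameters whose failed-group mean differs markedly from the
    healthy group, measured as a standardized mean difference."""
    flagged = {}
    for name in healthy:
        h, f = healthy[name], failed[name]
        spread = pstdev(h + f) or 1.0  # guard against zero spread
        score = abs(mean(f) - mean(h)) / spread
        if score >= threshold:
            # recommend the band that the healthy assets occupy
            flagged[name] = (min(h), max(h))
    return flagged

# Hypothetical parameter settings observed in the field (illustrative only)
healthy = {"gain": [1.0, 1.1, 0.9, 1.0], "offset": [5, 6, 5, 6]}
failed  = {"gain": [1.0, 0.9, 1.1, 1.0], "offset": [9, 10, 9, 11]}

bands = discriminating_parameters(healthy, failed)
print(bands)  # only the discriminating parameter(s) and their target bands
```

Here only `offset` separates the two groups, so it alone would be recalibrated; a production analysis would use a proper discriminant model and far more assets, but the question it answers is the same.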
4. Real-time visual intelligence
This is probably the most widely adopted use case, where the platform answers the simple question of “How are my assets doing right now?” This could be transformers in a power grid, oil field assets in a digital oil field context or boilers deployed in the connected plants context. The ability to have real-time “eyes” on industrial field assets streaming in timely state information is crucial. The reduced latency combined with the visual processing of out-of-condition events using geospatial and time-series constructs can be liberating for hardcore engineering industries not used to experiencing the power of real-time field intelligence.
5. Optimizing energy and fuel consumption
For many moving assets like aircraft, fleet trucks and ships, fuel cost is a significant line item in operations. Fuel consumption sensor data mashed with location data collected from mobile assets can help optimize fuel efficiency. We worked with a major fleet owner to reduce fuel consumption by 2%, shaving millions of dollars off the company’s operational expenses. The customer was able to reallocate those funds to a major project it had been putting off due to budget constraints.
6. Asset forensics
As assets become increasingly digitized, businesses can get a granular, 360-degree view of their health spanning sensor data pools, ambient conditions, maintenance events and connected assets. One can confirm an asset failure hypothesis and detect correlations from these new, rich data pools. This is much richer intelligence than existing diagnostic processes provide today.
7. Predicting failure
Once there is a critical mass of signals, multivariate models can be built to score an asset on failure probability. When this predicted failure probability crosses a certain threshold, it can automatically trigger a proactive ticket in the maintenance system (such as Maximo) for an intervention: replacing a part, recalibrating a machine or flagging it for closer inspection. Many companies are moving toward predictive maintenance models and away from time-based maintenance programs to make their operations more efficient. We have a customer that restructured its entire maintenance program around real-time streaming signals from its machines. This company has been able to offer its customers a more efficient maintenance program based on the actual performance of the equipment.
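The score-and-trigger loop above can be sketched as follows, using a logistic model with made-up coefficients. In a real deployment the weights would come from an offline multivariate model and the ticket would be created through the maintenance system's API rather than returned as a dict; every name and number here is hypothetical.

```python
import math

def failure_probability(weights, bias, signals):
    """Logistic failure score from streaming signal features."""
    z = bias + sum(weights[k] * signals[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def maybe_open_ticket(asset_id, signals, weights, bias, threshold=0.8):
    """Return a maintenance ticket when predicted risk crosses the threshold;
    in production this would call the maintenance system (e.g. Maximo)."""
    p = failure_probability(weights, bias, signals)
    if p >= threshold:
        return {"asset": asset_id, "risk": round(p, 3), "action": "inspect"}
    return None

# Hypothetical coefficients from an offline multivariate model
weights = {"vibration": 2.0, "temp_delta": 1.5}
bias = -4.0

ticket = maybe_open_ticket("pump-17", {"vibration": 2.5, "temp_delta": 1.0},
                           weights, bias)   # high-risk asset -> ticket
no_ticket = maybe_open_ticket("pump-18", {"vibration": 0.5, "temp_delta": 0.2},
                              weights, bias)  # healthy asset -> None
print(ticket, no_ticket)
```

The threshold is a business decision, not a statistical one: set it too low and the field crews drown in false alarms, too high and the model only confirms failures after the fact.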
As Marcel Proust said, “The voyage of discovery is not in seeking new landscapes, but in having new eyes.”
Good luck with your IoT big data voyage!
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.