Artificial intelligence is slowly but steadily embedding itself into the core processes of multiple industries and reshaping the industrial landscape, from deep learning-powered autonomous cars to bot-powered medical diagnostics. The industrial and energy sectors are not immune to the disruption that comes with embracing AI. As upstream and downstream companies gear up for AI, there is one important lesson I want to share that might seem counterintuitive: for the successful execution of an AI project, the data matters more than the algorithm. Seems odd, right?
Let me start by sharing a recent experience. Flutura was working with a leading heavy equipment manufacturer based in Houston that has numerous industrial assets deployed on rigs globally. These rotary assets were densely instrumented; they had a rich digital fabric of pressure sensors, flow meters, temperature sensors and rpm sensors, all continuously streaming data to a centralized data lake. The problem the manufacturer was trying to solve was how to “see” typically unseen early warning signals of failure modes in order to reduce multimillion-dollar downtimes.
To do this, every time a piece of upstream equipment went down, we needed to label the reason it went down. The cause might have been motor overheating, bearing failure or low lube oil pressure, but until we knew the specific reason, it was difficult to extract the sequence of anomalies leading up to each failure mode. While this company had a massive sensor data lake running into terabytes, the information was useless until failure labels were embedded within each asset’s timeline. To eliminate these “failure mode” label blind spots, we configured an app that institutionalized the labeling process. Every time a maintenance ticket was generated for unplanned equipment downtime, the app stepped through a workflow at the end of which the failure mode was tagged onto the asset’s timeline.
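The core of that workflow is forcing every downtime ticket to resolve to a controlled failure-mode vocabulary rather than free text. The sketch below is a minimal illustration of that idea, not Flutura’s actual app; the function names, the `FailureLabel` record and the three example failure modes are assumptions drawn from the article’s examples.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical controlled vocabulary, taken from the failure causes
# mentioned in the article.
FAILURE_MODES = {"motor_overheating", "bearing_failure", "low_lube_oil_pressure"}

@dataclass
class FailureLabel:
    asset_id: str
    failure_mode: str
    downtime_start: datetime

def tag_failure(timeline: list, asset_id: str,
                failure_mode: str, downtime_start: datetime) -> FailureLabel:
    """Append a labeled failure event to an asset's timeline.

    Rejects anything outside the controlled vocabulary, so every
    unplanned-downtime ticket produces a machine-usable label.
    """
    if failure_mode not in FAILURE_MODES:
        raise ValueError(f"Unknown failure mode: {failure_mode!r}")
    label = FailureLabel(asset_id, failure_mode, downtime_start)
    timeline.append(label)
    return label
```

With labels accumulating like this, the anomaly sequences preceding each tagged event can later be mined from the sensor streams.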
So, here are three questions to ask your team before you embark on an AI project:
- Top three failures: Which are the top three high-value failure modes that are most economically significant?
Rationale: Not all failure modes are equal. Isolating and prioritizing the vital few failure modes from the trivial many saves money.
- Tagging process: When equipment goes down, is the failure mode automatically generated by the asset or does it need a “human in the loop” to tag failures?
Rationale: Some machines are programmed to record the failure mode event as a historian tag, while others need an external process.
- Breadth and depth: What is the breadth and depth of equipment data available in the data lake?
Rationale: To model failures end to end, one needs maintenance tickets, sensor streams and ambient context. And to “see” sufficient instances of a failure, the sensor data lake needs at least one to two years of operational data.
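The breadth-and-depth question can be turned into a quick readiness check before any modeling starts. The sketch below is a hypothetical illustration; the thresholds (one year of history, five labeled instances per failure mode) and the function name are my assumptions, not figures from the article.

```python
from datetime import datetime, timedelta

def coverage_ok(sensor_start: datetime, sensor_end: datetime,
                labeled_failures: dict,
                min_span_days: int = 365, min_instances: int = 5) -> bool:
    """Rough check that a data lake can support failure modeling.

    sensor_start/sensor_end bound the sensor history; labeled_failures
    maps each failure mode to its count of tagged occurrences.
    Thresholds are illustrative, not hard rules.
    """
    span_ok = (sensor_end - sensor_start) >= timedelta(days=min_span_days)
    labels_ok = bool(labeled_failures) and all(
        n >= min_instances for n in labeled_failures.values())
    return span_ok and labels_ok
```

A check like this makes the “one to two years of operational data” rationale actionable: if either the history span or the label counts fall short, the tagging process needs to run longer before the AI project starts.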
To conclude, it’s easy to get carried away by the hype surrounding AI and algorithms. But the key to winning the game is finding the answer to the above three data-tagging questions. Good luck as you introduce AI to unlock gold in your data.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.