Two of the most common buzzwords you hear these days are IoT and fog computing. I’d like to put them in perspective, drawing on real-world experience from my current role as CTO of FogHorn. I intend to divide this into a series of posts, as there are many topics to cover.
For this first post, I’d like to cover some basics and set the context.
The “T” in IoT refers to the actual devices, whether consumer-oriented, such as a wearable, or industrial, such as a wind turbine. When people talk about IoT, more often than not they mean consumer IoT devices. Plenty of technologies and applications already exist to manage and monitor those devices, and with the ever-increasing compute power of mobile devices and broadband connectivity, little groundbreaking new technology is needed to address the common problems there. Industrial IoT is a different story. Traditionally, the heavy and expensive equipment in the industrial sector, be it a jet engine, an oil-drilling machine, a manufacturing plant or a wind turbine, has been equipped with many sensors measuring temperature, pressure, humidity, vibration and so forth. Some modern equipment also includes video and audio sensors. That data typically gets collected through SCADA systems and protocol servers (MQTT, OPC-UA or Modbus, for example) and eventually ends up in some storage system. The amount of data produced per day ranges from terabytes to petabytes, depending on the type of machine, and much of it can be noisy and repetitive. Until recently, that data was rarely analyzed to glean insights into what might be going wrong, so there was no predictive analytics.
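To make the "noisy and repetitive" point concrete, here is a minimal sketch (function and variable names are illustrative, not any specific product's API) of a deadband filter, the kind of simple reduction often applied to sensor streams so that near-duplicate readings never need to be stored or transmitted:

```python
# Hypothetical deadband filter: keep a reading only when it differs from
# the last kept value by more than a configured threshold. Everything
# else is treated as repetitive noise and dropped.

def deadband_filter(readings, threshold):
    """Yield only readings that change by more than `threshold`."""
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            last = value
            yield value

# A largely repetitive temperature stream (degrees C):
raw = [70.0, 70.1, 70.0, 70.2, 75.5, 75.4, 75.6, 80.1]
kept = list(deadband_filter(raw, threshold=1.0))
print(kept)  # -> [70.0, 75.5, 80.1]
```

Even this toy example cuts the stream by more than half; on a machine emitting a packet every few milliseconds, that ratio translates directly into bandwidth and storage savings.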
The Industry 4.0 initiative and the digital twin concept hover around the idea of digitizing all of these assets and transporting all the data to the cloud, where analytics and machine learning can derive intelligent insights into the operation of these machines. There are several problems with this approach: lack of connectivity from remote locations, huge bandwidth costs and, more importantly, lack of real-time insight when failures occur or are about to occur. Edge computing, or fog computing, solves this problem by bringing compute and data analysis to where the data is produced (somewhat akin to the Hadoop concept of moving compute to the data). In this article, I use edge and fog interchangeably; some disagree, preferring to describe the fog layer as a continuum between edge and cloud, but for the purposes of this article that distinction doesn't matter much.
I know some of you may be thinking, “So what’s the big deal? There are mature analytics and machine learning technologies on the market today that run in data center and cloud environments.” Unfortunately, those technologies aren’t well-suited to constrained environments: low memory (< 256 MB of RAM), limited compute (single- or dual-core low-speed processors) and little storage. In many cases, the technology may have to run inside a programmable logic controller (PLC) or an existing embedded system. So the need is to do streaming analytics (data from each sensor is a stream, fundamentally time-series data) and machine learning (when the failure conditions can’t be expressed easily or aren’t known) on the real-time data flowing through the system. A typical machine or piece of equipment can have anywhere from tens to hundreds of sensors producing data at a fast rate: a data packet every few milliseconds, or sometimes microseconds. Data from different types of sensors (video, audio and discrete) may also need to be combined, a process typically referred to as sensor fusion, to correlate readings and find the right events. You also have to account for the hardware: the chipset can be either x86-based or ARM-based, and typical devices (gateways, PLCs or embedded systems) are the size of a Raspberry Pi or smaller. Technology that can perform edge analytics and machine learning in these constrained environments is critical to enabling real-time intelligence at the source, which results in huge cost savings for the customer.
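What does a streaming analytic that fits such a device look like? Here is a minimal sketch (the class and parameter names are my own invention, not any vendor's API): a bounded sliding window over a single sensor stream that flags samples deviating sharply from the recent rolling mean. Memory use stays fixed at the window size no matter how long the stream runs, which is the property that matters on a gateway or PLC:

```python
# Illustrative fixed-memory streaming anomaly check. The deque's maxlen
# bounds the buffer, so memory is O(window) regardless of stream length.

from collections import deque

class RollingAnomalyDetector:
    def __init__(self, window=5, factor=3.0):
        self.window = deque(maxlen=window)  # bounded history buffer
        self.factor = factor                # sensitivity multiplier

    def push(self, value):
        """Return True if value deviates from the rolling mean by more
        than factor times the window's mean absolute deviation."""
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            mad = sum(abs(v - mean) for v in self.window) / len(self.window)
            anomalous = abs(value - mean) > self.factor * max(mad, 1e-9)
        else:
            anomalous = False  # not enough history yet
        self.window.append(value)
        return anomalous

det = RollingAnomalyDetector(window=5, factor=3.0)
stream = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 25.0, 10.0]
flags = [det.push(v) for v in stream]
print(flags)  # only the 25.0 spike is flagged
```

Real deployments would layer sensor fusion and learned models on top of primitives like this, but the shape is the same: process each sample as it arrives, keep only a small bounded state and raise an event at the source instead of shipping raw data to the cloud.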
In my next article, I’ll talk about some of the use cases that are taking advantage of this technology and explain how the technology is evolving and rapidly finding its way into many verticals.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are those of the writers and do not necessarily convey the thoughts of IoT Agenda.