
Edge computing and AI: From theory to implementation

The extensive coverage devoted to AI and edge computing sparked an idea during a recent visit to JFK Airport. My journey coincided with a severe storm that disrupted travel along the East Coast. The situation illustrated how customer service agents assist passengers (at the edge) when dealing with uncertainty and changing circumstances (relying on predictive analysis and intelligent decision-making under uncertainty).

In broad-brush terms, edge computing moves intensive computing for decision-making into the field (i.e., closer to the edge). It reduces the need to transfer copious amounts of data to a cloud-hosted application, and it reduces both the impact of transmission delays and the cost of data transport. An example might be a video processing unit that uses a CCTV feed to detect anomalies (for example, intruder sensing). Edge processing aims to extract features from a video stream, trigger an action locally (e.g., sound an alarm) and communicate metadata to a cloud application.
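The CCTV pattern above can be sketched in a few lines. This is a minimal illustration, not a real computer-vision pipeline: the frame-difference test, the threshold values and the callback names are all assumptions made for the sketch. The point is the shape of the design: act locally with low latency, and ship only small metadata (not the raw video) upstream.

```python
from dataclasses import dataclass

# Hypothetical thresholds for this sketch, not tuned values.
MOTION_THRESHOLD = 0.2   # fraction of pixels that changed
PIXEL_DELTA = 10         # per-pixel change considered significant

@dataclass
class Metadata:
    """Small payload sent to the cloud instead of the raw frame."""
    frame_id: int
    changed_fraction: float

def changed_fraction(prev: list[int], curr: list[int]) -> float:
    """Fraction of pixels that differ between two grayscale frames."""
    diffs = sum(1 for a, b in zip(prev, curr) if abs(a - b) > PIXEL_DELTA)
    return diffs / len(curr)

def process_frame(frame_id, prev, curr, sound_alarm, send_to_cloud):
    """Edge loop step: act locally on an anomaly, send only metadata upstream."""
    fraction = changed_fraction(prev, curr)
    if fraction > MOTION_THRESHOLD:
        sound_alarm()                                 # local, low-latency action
        send_to_cloud(Metadata(frame_id, fraction))   # small payload, not raw video
    return fraction
```

A real deployment would replace the naive pixel diff with a trained detector, but the division of labor, local trigger plus compact metadata to the cloud, stays the same.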

In simplified terms, AI covers a wide range of activities that involve analyzing data to find patterns. Common techniques include machine learning and deep learning. AI also encompasses the application of rules, some of which may be multilayered depending on the complexity of individual situations, to trigger an action.

In the case of predictive maintenance, sensor data from a machine feeds a learning system in order to detect anomalies. A change in noise frequency, for example, might point to excessive wear or an absence of lubricant. After comparing these anomalies against “healthy” behavior as predicted by a reference model or digital twin, a rule-based system would alert the machine’s operator. It might automatically schedule a repair after checking on the availability of suitable facilities in a workshop where there are qualified technicians and available spare parts.
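The predictive-maintenance flow described above, comparing a sensor signal against a "healthy" reference band and then applying layered rules, can be sketched as follows. Every name, frequency band and rule here is a hypothetical assumption for illustration; a real system would derive the healthy band from a learned reference model or digital twin rather than a hard-coded range.

```python
# Assumed "healthy" vibration band, standing in for a reference model / digital twin.
HEALTHY_FREQ_HZ = (40.0, 60.0)

def is_anomalous(freq_hz: float) -> bool:
    """Anomaly check: reading falls outside the healthy reference band."""
    low, high = HEALTHY_FREQ_HZ
    return not (low <= freq_hz <= high)

def maintenance_action(freq_hz: float,
                       technician_available: bool,
                       parts_available: bool) -> str:
    """Rule layer: alert the operator; schedule a repair only if the
    workshop has both a qualified technician and spare parts."""
    if not is_anomalous(freq_hz):
        return "ok"
    if technician_available and parts_available:
        return "alert_and_schedule_repair"
    return "alert_operator"
```

The two-step structure mirrors the article's description: the learning system flags the anomaly, and a separate rule-based layer decides what to do about it given workshop availability.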

From theory to practice

While all of this sounds fine in theory, let’s consider some of the real-world implementation issues in such a scenario. Coming back to JFK Airport, think about a situation where your flight has been cancelled or rescheduled. Focus on your experience with a customer service agent at an airline’s desk. You will be speaking to an airline employee who is processing your instructions to reschedule a flight for you. This involves a lot of real-time visual and auditory processing at the edge of a wider ticketing system.

In this scenario, there is only so much an agent can do. Yes, there is a lot of edge processing to interpret your annoyance and frustration. You might get an empathetic hearing, but there’s no guarantee that you will get home in any less time. This kind of edge processing is of limited benefit when operating conditions depend on external and uncontrollable factors. In reality, this is where service providers need real value delivered to justify their investment in edge processing devices.

In practice, the agent works within a constrained set of rules based on the type of booking you hold (e.g., premium, flexible, no-changes and so forth), your status and available airline capacity. Almost certainly, the agent is working with partial information. How often have you heard an agent say that the airline's booking system (in the cloud) is not showing any explanatory flight delay information?

Reasoning and decision-making under uncertainty are core AI capabilities. They need edge and centralized systems to collaborate. Sometimes, the edge agent has a degree of autonomy to make rebooking decisions, and some agents may have more autonomy than others. However, these decisions cannot occur in isolation. It’s essential to update central capacity management systems so that the airline has a clear picture of its overall capacity to optimize load-balancing decisions. This is a dynamic process which may involve multiple parallel iterations as several agents deal with many irate travelers. That’s a lot of multi-edge to cloud communication.
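The edge/central collaboration described above, agents with bounded autonomy whose decisions must immediately update a shared capacity picture, can be sketched in miniature. The class names, the fare-limit rule and the seat counts are all invented for this illustration; a real airline system would involve far richer state and many concurrent agents.

```python
class CentralCapacity:
    """Central system tracking seats so load balancing sees every edge decision."""

    def __init__(self, seats_by_flight: dict[str, int]):
        self.seats = dict(seats_by_flight)

    def reserve(self, flight: str) -> bool:
        """Atomically claim a seat; fails once capacity is exhausted."""
        if self.seats.get(flight, 0) > 0:
            self.seats[flight] -= 1
            return True
        return False

def rebook(agent_limit_usd: float, fare_usd: float,
           flight: str, central: CentralCapacity) -> str:
    """Edge agent: autonomous within a fare limit, but never decides in isolation --
    every rebooking is synced with the central capacity system."""
    if fare_usd > agent_limit_usd:
        return "escalate"            # beyond this agent's autonomy
    if central.reserve(flight):      # central picture updated before confirming
        return "rebooked"
    return "no_capacity"
```

With several agents sharing one `CentralCapacity`, the second agent to try the last seat gets `"no_capacity"` rather than an overbooking, which is exactly why the edge cannot act in isolation.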

What this example illustrates is that single-point technologies, like AI and edge computing, deliver value in relatively straightforward use cases. These involve deterministic situations where the consequences of a problem and prescribed course of action justify the cost of dedicated hardware and software.

Airports, buildings and cities, to list a few examples, represent more complicated operating environments. These typify the applications that are more likely to appear on solution providers’ agendas. Normal operations and problem situations for these use cases depend on collaboration and frequent communications across technical and commercial boundaries. Single-point technologies are one element in a larger system.

If you are a customer, beware of simple technology recommendations for complex application scenarios. If you are a seller, have a ready response to explain how your technology fits into the architecture of an overall system. And, everybody should spare a thought for the customer service agents who are masking a great deal of underlying complexity from you.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.