Elon Musk is no fan of artificial intelligence: a couple of years ago, he tweeted an ominous warning that it could cause World War III. The late Stephen Hawking was in the same camp. In one BBC interview, he predicted AI could spell the end of the human race because it would ultimately take off on its own, redesigning itself at an ever-increasing rate that we slow-evolving humans couldn't compete with.
And that beat has gone on: a Fast Company article describes robots learning languages we can't understand, while Forbes warns of AI risks like autonomous weapons, social manipulation, invasion of privacy, social grading and discrimination. In each piece, we're encouraged to fear AI's misuse, the stuff of dystopian nightmares. But should we?
As a technologist who helps enterprises find value by adopting IoT tools, connectivity and technology — including AI — I’m often asked: Are these nightmares grounded in reality? My answer: A little, yes. But mostly no. There’s little to fear and much to gain.
Here’s what I mean.
First, we should be clear about what we mean when talking about AI. At a high level, artificial intelligence can be divided between general AI and narrow AI. General AI is an attempt to create the kind of adaptable intelligence that we see in the movies — think HAL in 2001: A Space Odyssey. This kind of AI doesn’t really exist yet; in fact, there’s vigorous debate in the scientific community as to whether it will ever become reality. And this kind of AI is what most dystopian scenarios are built on. I’m personally not concerned about it.
Narrow AI is what we work with today: intelligent systems that have been taught — or have learned — how to carry out specific tasks without being explicitly programmed to do so. This includes narrowly defined tasks, like driving autonomous vehicles, responding to simple customer service queries or flagging fraudulent credit card transactions. When narrow AI coordinates with other narrow AI, it can do things like book hotels or flag inappropriate content online.
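To make "narrowly defined tasks" concrete, here is a deliberately toy sketch of the fraud-flagging idea: a transaction is flagged when its amount sits far outside an account's spending history. (This is an illustration only; the threshold, the z-score rule and the function name are my own inventions, and real systems use learned models rather than a single statistic.)

```python
import statistics

def flag_fraud(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from the account's purchase history.
    Toy example -- production systems use trained models."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

history = [20.0, 35.0, 18.0, 42.0, 25.0]
print(flag_fraud(history, 30.0))   # a typical purchase -> False
print(flag_fraud(history, 900.0))  # a wild outlier -> True
```

Even this crude rule captures the shape of narrow AI: a system tuned to one well-bounded decision, with no ambitions beyond it.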
This kind of AI is crucial to IoT; in fact, the two are inextricably linked. If you think of an IoT network as a body, then data is its blood and AI is the organ that brings in the data, processes it, boiling billions of data points down to identify patterns, and then recirculates it to start the cycle again. In IoT-enabled industries, this data powers things like predictive maintenance and asset tracking. You can't have one part of the cycle without the others.
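The ingest-process-act cycle behind predictive maintenance can be sketched in a few lines: stream in sensor readings, reduce them to a pattern (here, a rolling average), and raise an alert when the pattern drifts out of bounds. The sensor values, window size and limit below are all made up for illustration.

```python
from collections import deque

def monitor(readings, window=3, limit=50.0):
    """Yield (reading, alert) pairs. `alert` becomes True once the
    rolling average of the last `window` readings exceeds `limit`.
    A toy stand-in for the IoT data cycle described above."""
    buf = deque(maxlen=window)  # keeps only the most recent readings
    for r in readings:
        buf.append(r)
        yield r, (len(buf) == window and sum(buf) / window > limit)

# Hypothetical vibration readings from a machine starting to fail
vibration = [42.0, 44.0, 43.0, 58.0, 61.0, 63.0]
alerts = [r for r, alert in monitor(vibration) if alert]
print(alerts)  # the later, elevated readings trigger alerts
```

A real deployment would learn these thresholds from historical failure data instead of hard-coding them, but the loop is the same: data in, pattern out, action triggered, cycle repeats.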
I do think data security is a legitimate concern for narrow AI, and that appropriate privacy and security protocols are critically important. But data is what AI processes, not AI itself. And while no one can guarantee there will never be data breaches within an AI system, there are excellent guidelines for password protection and how data is encrypted, transmitted and stored.
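As one example of the guidelines mentioned above, passwords guarding an AI system's data should never be stored in plaintext; a standard approach is a salted, slow key-derivation function such as PBKDF2. Below is a minimal sketch using Python's standard library (the iteration count and function names here are illustrative choices, not a complete security design).

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a storage-safe digest with PBKDF2-HMAC-SHA256.
    Store the salt, iteration count and digest -- never the password."""
    salt = salt or os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
```

The point is that the security story for narrow AI is the familiar security story for any data system: salting, hashing, encryption in transit and at rest, all well-understood practice rather than uncharted territory.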
So no, the AI we use today isn’t the bogeyman these articles would have you believe. As an industry professional, I think alarmism about new technologies simply makes for good clickbait. Emerging technologies do sometimes have unintended, adverse effects, but using AI to improve industrial performance, asset tracking and safety is just the next step in a well-understood, decades-long movement towards automation.
And looking ahead to the step after that, 5G will profoundly affect narrow AI by increasing the capacity of the internet as it expands to accommodate the 20.4 billion connected devices Gartner predicts will be in use by 2020, with latency as low as 1 millisecond. The new wireless standard is also expected to cut network energy usage by 90%, extending the battery life of low-power IoT devices by as much as 10 years.
In the end, AI is a tool. It’s a tool that’s helping improve industrial value and enhance consumer experiences. And, like most tech people I know, I really love having the best tools for the job. Don’t you?
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.