Often attached to physical systems, IoT devices are positioned to become part of the long-term infrastructure of the world. In the home, beyond DVRs and network-attached cameras, this includes gas and water meters. In industry, this includes manufacturing control systems. And in a city, it might include traffic lights and sewage control mechanisms. Given this, today's operating systems and applications must be designed and implemented with the IoT deployment model in mind. This calls for significant research before products get to market, including work focused on securing this new internet-connected mesh of devices, appliances and software.
My thoughts dovetail with a recent op-ed piece by Senator Warner, wherein he outlines goals for addressing the cybersecurity gaps highlighted by the WannaCry ransomware attack in the spring of 2017. Senator Warner's first point — that we need technology to provide capabilities to tackle this challenge — is the real focus of this column. What follows are research objectives that industry and academia must address before we can begin solving IoT's security issues.
A typical Linux or Windows-based server or laptop has an expected lifetime of three to five years, and its operating system is designed around that same update cycle. Hardware refresh cycles mean that you not only get the latest advantages of speed and capacity, but you can update your software as well. While we extol the virtues of mainframes and various systems that have seen a decade or more of active service, those are discussed precisely because they're the exception. Furthermore, they have teams of dedicated operators who maintain them.
Contrast this with a typical IoT vision — thousands of devices embedded into the fabric of the world, taking measurements or making adjustments in response to conditions. Low-cost devices (e.g., ZigBee network devices) scattered about. If you have to staff this for maintenance, even for replacement, your costs skyrocket — a team can only manage so many devices at a time. This model won’t work for IoT.
Consider the recent WannaCry ransomware, which highlights this gap vividly. Although Microsoft had released a patch for the EternalBlue vulnerability two months earlier, many systems remained unpatched and were affected, including IoT devices in Europe and elsewhere. The ransom message with the red background was a stark indicator of the problem, visible at some ATMs, transit stations and elsewhere.
IoT reliability over time obviously affects security, but it affects safety as well. To manage this risk, software reliability is of paramount concern, not only in the short term but in the long term. These risks center on a number of aspects that require us to invest research effort. U.K. security researcher Ross Anderson has delved deep into these topics in some of his recent work.
Patching is a mess
First, the topic of reliable software updates must be tackled head on. At present, patching software incurs downtime and risks reliability. Large enterprise firms patch their managed fleets only at controlled, scheduled intervals and after rigorous testing; many of these firms invoke out-of-cycle updates only in emergencies, such as active exploitation. As a friend put it, not patching for every update quickly becomes a rational act, as end users and administrators estimate the risk of service disruption due to patching to be higher than that of a cybersecurity incident due to those flaws. Patches are then left to be applied in bulk at sporadic intervals, when staffing and attention can be brought to bear during the inevitable disruption and downtime. A small but growing number of applications, such as Google Chrome, patch silently in the background to address this, but that number is too small at present.
On the topic of instability introduced by patching, there has been a big push in the past five years or so toward formal specification and verification of software. A formally verified system, with well-understood behaviors, can be used to detect the introduction of unspecified behaviors that lead to unreliability and the associated risk. However, very little of this work has focused on applying these methods to existing software, and most formally verified systems have remained in the academic sphere. Among the most visible industrial successes is Amazon AWS, which uses the TLA+ specification language and its model checker to design its systems, yielding tremendous reliability in the process. These breakthroughs demonstrate that formal methods can pay off, but they also illustrate how complex the process is. Only a small amount of research is working to bring these formal methods into everyday software engineering; widespread adoption is easily a decade away, meaning starting sooner has real benefits.
On the topic of patching without downtime, only a little research has gone into dynamic software updates (DSU), but more must occur, and it must target existing codebases. DSU techniques allow a running system to be patched in place, using the program's semantics to determine safe update points. The Ginseng compiler, for example, attempts this for C code and has been applied to OpenSSH, Apache and other real-world codebases in the lab. A similar project, the Kitsune compiler, may also prove useful. Similarly, the Linux kernel live-patching tools kpatch and Ksplice should be a focus of renewed, sustained effort given the growing popularity of Linux-based IoT devices.
While IoT devices provide the most recent motivation for this challenge, the fact is that patching — the security equivalent of washing your hands during cold and flu season — remains a mess. A plethora of vendors will tell you when you should patch, and some can help you track patch status, but the complex systems shipped by application and OS vendors still incur significant downtime and risk when patching. Only if this reality is addressed will frequent and timely patching become widespread.
End of life spells doom
Second, the topic of long-term ownership and maintenance must be addressed. At present, when a vendor goes out of business or marks a product end of life, end users are stuck with whatever they have at that time. Consider Nest's shutdown of the Revolv home hub — now scale that to a giant with a massive installed base, like GE or Siemens, or to autopilot systems in cars. Existing laws and regulations prevent anyone else from assuming a maintenance role outside of narrow circumstances. This topic raises all sorts of questions: What about liability? Given a compelling business model someone may pick up the assets, but in the absence of a profit center, why would anyone? Can device owners begin patching their devices themselves? If so, can they obtain source code, signing keys or the like? These questions need answers; expect policy debates in the coming years.
The path ahead looks fraught with hurdles; and, to be fair, it will be a challenge. But recent research into these long-term software maintenance topics, and the prototype solutions it has produced, indicates that progress is possible now. And, for the reasons outlined above, the benefits are ever more pressing and will help not only the growing IoT market, but also the existing, traditional tech market. I encourage those responsible for setting the research agenda to make this their focus, both in technology and in policy development.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.