One of the most dynamic and exciting developments in information and communications technology is the advent of the Internet of Things (IoT). Although networking technologies have become increasingly ubiquitous over the past two decades, until recently they have largely been restricted to connecting traditional end-user devices, such as mainframes, desktop and laptop computers, and, more recently, smartphones and tablets.
Recent years have witnessed the attachment of a much broader range of devices to the network. These have included vehicles, household appliances, medical devices, electric meters and controls, street lights, traffic controls, smart TVs and digital assistants such as Amazon Alexa and Google Home. Industry analysts estimate that there are currently more than eight billion such devices connected to the network and project that this number will expand to more than 25 billion by 2020. The increasing deployment of these devices has enabled new use cases for network technologies. Some experts project that the IoT may generate as much as US$13 trillion in revenue by 2025.
Unlike traditional cyber systems, which connect general-purpose computers, IoT systems often link together highly specialized devices designed for specific purposes, with only a limited degree of programmability and customizability. IoT systems also often store and process data in a distributed manner, in contrast to the highly centralized approach of consolidating storage and computing power in large data centres. Finally, IoT systems are sometimes called cyber-physical systems because, unlike purely cyber systems, they include sensors that collect data from the physical world.
The distributed nature and the presence of physical sensors create both new opportunities and vulnerabilities from the standpoint of security and privacy. To date, however, the industry, end-users and the academic community have only just begun to appreciate what the burgeoning deployment of this technology might mean and to study how to prepare for the challenges posed by this new technological environment.
One of the IoT’s most distinctive aspects is the increasingly personal nature of the information collected. Connecting vehicles to the network means that others can track those vehicles’ movements and the manner in which they are operated. The use of smart devices in homes can reveal a great deal of information about residents’ habits and the ways that they live their lives. Attaching medical devices to the network can yield an immense amount of sensitive information about people’s health care. Combining multiple sources of data and running predictive analytics on the result can allow interested parties to infer surprisingly detailed personal information about those using IoT devices. Interestingly, a survey of US consumers indicated that they are most concerned about the sharing of information that reveals their personal habits (Rainie and Duggan 2016).1
Another difference between IoT systems and traditional systems is the frequency with which data is stored and processed locally. The fact that many IoT systems have little tolerance for latency often means that they handle many of the data-related functions in the local device instead of transmitting all data to a central location, such as a data centre.
Storing and processing data on a distributed basis has both advantages and disadvantages. The absence of a single large repository of multiple users’ data eliminates the presence of a large tempting target with a single attack surface that can draw the attention of cyber attackers. At the same time, decentralized storage raises the possibility that some locations will not consistently maintain the appropriate levels of security hygiene. Instead of relying on a single, hardened point protected by a small cadre of highly trained security professionals, distributed storage and processing rely on the diligence of individual users to maintain the integrity of the system.
In addition, the lack of centralized control means that any system architect must take into account the fact that the incentives of different actors connected to the system will necessarily vary. Although decentralized decision making often leads to outcomes that maximize the benefits to the system as a whole, that is not always the case. Under certain circumstances, it may be in the selfish best interest of one actor to submit erroneous data into the system in order to try to obtain greater benefits or to bear fewer costs. Even if every actor were to submit accurate information, individual actors may find it advantageous to deviate from their expected response to that data. As a result, IoT systems need some way to ensure the provenance and accuracy of data and to police whether decentralized decision makers are acting in ways that are consistent with the proper functioning of the overall system.
Everyone who has used the internet is well aware of the onslaught of cyber attacks that bombard computers nearly every day. Viruses, worms, trojans, botnets and other forms of malware have become all-too-familiar parts of the online experience, as are persistent efforts to hack through security.
The fact that IoT systems necessarily incorporate sensors that collect data from the physical world subjects them to an entirely new vector of attack. In addition to the range of traditional online threats, flooding a sensor with electromagnetic radiation can cause it to malfunction. Even worse, a more sophisticated attacker can send carefully calibrated erroneous information to the sensor that can cause the system to take actions that are not warranted by the actual situation. For example, something as simple as spoofing location data can cause a connected car to veer far off course.
The fact that IoT devices are both partially programmable and connected to the network raises the possibility that bad actors may attempt to commandeer them or cause them to malfunction. The reality is that most IoT systems were not designed with security in mind. Video repositories such as YouTube contain numerous videos showing how sophisticated actors can use laptops to take over the driving functions of cars. The trade press abounds with stories where malicious operatives have subverted smart refrigerators, televisions, baby monitors and digital assistants. Perhaps most problematically, many medical devices have no security built into them at all. Many stories document the ease with which hackers can stop critical devices such as pacemakers and insulin pumps.
One can easily conceive of situations that would go beyond mere interference and extend to even more dire situations. The phenomenon of ransomware suggests that an adverse actor could use these capabilities to engage in extortion or worse.
The existence of these potential threats underscores the need for the IoT industry and the academic community to develop solutions to these problems. Under a recent US National Science Foundation grant, a number of colleagues and I have designed a variety of strategies to address these problems.2
For example, the redundancy inherent in the distributed nature of the IoT can guard against cyber attacks, including zero-day attacks that have never been seen before. Using an emerging approach known as accountability, IoT systems can assign a number of the other nodes to recheck the calculations of each node periodically. If a majority of the other nodes assigned to rerun the calculation come to a different result, the node being checked is declared faulty and isolated from the system.
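As a rough illustration of this idea (the function names and the simple majority-vote rule here are illustrative sketches, not drawn from any particular implementation), a periodic recheck might look like:

```python
from collections import Counter

def check_node(reported_result, task_input, checker_fns):
    """Have several checker nodes rerun the checked node's task and
    compare its reported result with the checkers' majority answer."""
    recomputed = [fn(task_input) for fn in checker_fns]
    majority, _ = Counter(recomputed).most_common(1)[0]
    return reported_result == majority  # False -> declare the node faulty

# Honest nodes compute the true function; a compromised node deviates.
honest = lambda x: x * x
compromised = lambda x: x * x + 1

# A faulty node reports 26 for input 5; three honest checkers agree on 25.
ok = check_node(honest(5), 5, [honest, honest, honest])           # True
caught = check_node(compromised(5), 5, [honest, honest, honest])  # False
```

In practice the rechecked task must be deterministic (or made so) for the comparison to be meaningful, and checkers would be rotated so no fixed set can shield a corrupted node.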
Another technique known as state estimation can protect against sensor attacks. This approach takes the early experiences with a particular environment to estimate the reasonable range of possible values that a sensor might report. If the system receives data from the sensor that falls outside that range, it can flag that sensor for additional scrutiny or even go so far as to isolate it from the system.
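A minimal sketch of this calibrate-then-flag pattern follows; the mean-plus-or-minus-k-standard-deviations range is a deliberately simple stand-in for a real state estimator (such as a Kalman filter), and the temperature readings are invented for illustration:

```python
def calibrate(history, k=3.0):
    """Estimate a sensor's plausible range from early readings:
    mean +/- k standard deviations of the calibration sample."""
    n = len(history)
    mean = sum(history) / n
    std = (sum((x - mean) ** 2 for x in history) / n) ** 0.5
    return mean - k * std, mean + k * std

def plausible(reading, bounds):
    """True if the reading falls inside the estimated range."""
    lo, hi = bounds
    return lo <= reading <= hi

# Calibrate on early room-temperature readings (degrees Celsius).
bounds = calibrate([20.1, 19.8, 20.3, 20.0, 19.9])

plausible(20.2, bounds)   # in range: accept
plausible(95.0, bounds)   # far outside: flag for scrutiny or isolate
```

As the article notes below, this check is only probabilistic: an attacker who stays inside the estimated range, or who corrupts the calibration sample itself, will pass it.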
With respect to privacy, a scheme known as differential privacy can prevent particular data from being attributed to any specific person in situations when individual data points are combined and reported as an aggregate value, such as a mean, by adding a predefined range of random noise to each data point. If the number of observations being aggregated is large enough, the central limit theorem of statistical analysis dictates that the randomness of the noise will tend to cancel itself out. This key concept of probability theory means that the data associated with different individuals can be obfuscated without materially degrading the quality of the information being sought. However, the resulting mean is more properly regarded as a distribution than as a true value. So long as the designers know how much variation the problem on which they are working can tolerate, they can calibrate the system in a way that preserves anonymity without compromising system performance.
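The per-report noising described above can be sketched as follows (a local differential-privacy scheme using the Laplace mechanism; the parameter names, the clamping range and the choice of Laplace noise are illustrative assumptions, not a prescribed design):

```python
import random

def laplace(scale):
    """Sample Laplace noise: the difference of two independent
    exponential variables is Laplace-distributed."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def noisy_report(value, epsilon, lo=0.0, hi=1.0):
    """Each device perturbs its own reading before sharing it,
    with noise scaled to the range of possible values."""
    clamped = min(max(value, lo), hi)
    return clamped + laplace((hi - lo) / epsilon)

def private_mean(values, epsilon=1.0):
    """Aggregate the noisy reports; with enough reports the
    per-report noise tends to cancel in the mean."""
    reports = [noisy_report(v, epsilon) for v in values]
    return sum(reports) / len(reports)

random.seed(42)  # for a reproducible demonstration
estimate = private_mean([0.5] * 10_000, epsilon=1.0)
# With 10,000 reports the noise largely cancels, so `estimate`
# lands close to the true mean of 0.5.
```

The trade-off the paragraph describes is visible in the parameters: a smaller epsilon means stronger privacy but wider noise, and therefore more observations are needed before the aggregate stabilizes.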
What is perhaps most striking about each of these potential solutions is that none is perfect. Consider the approach reflected in accountability. If all of the nodes assigned to rerun the calculations of the node being checked are themselves compromised, they will come to the same erroneous answer and thus will fail to identify the fact that the node being checked has been corrupted. These errors can be reduced by assigning more nodes to rerun the calculations or by rerunning the calculations more frequently, but these solutions are costly and still will not completely eliminate the possibility that an attack may escape detection.
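The residual risk described above can be made concrete with a back-of-the-envelope calculation, under the simplifying assumption that checkers are drawn independently from a pool in which a fraction p of nodes is compromised and colluding:

```python
def escape_probability(p_compromised, num_checkers):
    """Probability that every randomly chosen checker is itself
    compromised, so a faulty result goes undetected."""
    return p_compromised ** num_checkers

# With 10% of nodes compromised:
escape_probability(0.10, 3)   # roughly 0.001
escape_probability(0.10, 6)   # roughly one in a million, but never zero
```

Doubling the number of checkers shrinks the escape probability geometrically, which is exactly why adding checkers helps, but the value never reaches zero, which is exactly why it cannot fully substitute for other defences.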
Similarly, state estimation only provides a probabilistic indication of integrity. It is possible that an attack might yield values that fall within the range predicted by state estimation or might be successfully spoofed during the initial calibration phase so that the system believes that erroneous data is actually accurate.
The limits of these solutions underscore the fact that no amount of diligence can completely eliminate the security and privacy risks confronting IoT systems. Indeed, system designers could spend their entire development budgets on improving security, in which case they would have no money left to develop product features, and their system would still not be entirely secure. This means that the proper design of privacy and security of the IoT must be conceived as a trade-off that attempts to strike the proper balance between functionality and security.
The limited nature of security also dictates that the quest for perfect protection represents something of a unicorn hunt. Although designers should attempt to protect their systems as well as possible, the impossibility of perfect protection dictates that they should also plan for the inevitable failures by employing a layered security approach that supplements border protections with mechanisms designed to achieve fast detection and remediation of problems as they occur.
The need to optimize multiple concerns also necessarily implies that the solution will not turn solely on the available technical solutions. Instead, the ultimate balance will depend on economic and legal considerations as well. For example, policy makers must decide whether to rely on tort law, which involves ex post compensation for wrongful harms suffered, or regulation, which focuses on ex ante prevention of harms.
With respect to tort law, whether product liability will stop short of holding IoT device manufacturers to a perfection standard may depend on how many other courts follow the lead of many US and Canadian courts and adopt the risk-utility standard. This standard explicitly frames the analysis in terms of the costs and benefits of different designs.
Regulation will likely follow the existing sector-specific agency structure, which will assign responsibility for different types of IoT to different agencies. This division of authority risks yielding inconsistent outcomes and relying on IoT expertise spread thinly across multiple agencies.
A central question regarding privacy regulation will turn on whether it will follow the sector-specific approach followed in the United States and some Canadian provinces, or the omnibus privacy regulation embraced by the federal government in Canada and in Europe.
In addition, several standard-setting organizations (SSOs) are vying for leadership in IoT standards. The burgeoning significance of the IoT heightens the importance of the governance structures that determine how these SSOs will make decisions.
Perhaps most importantly, the primary goal should not be to remediate problems that have occurred, but rather to create high-powered economic incentives to avoid them in the first place. That means that any legal and regulatory interventions must seek to align incentives with good outcomes and should reflect the likely reactions to any policies.
Rainie, Lee and Maeve Duggan. 2016. Privacy and Information Sharing. Washington, DC: Pew Research Center.
The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.