A recent article from Business Insider related a story about a temperature sensor in a casino fish tank being used as an entry point for attackers to steal customer information.
The story also quoted a former director at Britain’s GCHQ as saying, in regard to IoT devices, “there’ll likely need to be regulation for minimum security standards because the market isn’t going to correct itself.”
I think he’s half right: the market isn’t going to correct itself. It’s going to continue to release buggy, flawed software and hardware.
Unfortunately, I don’t see regulation having much of a chance at present, at least in the United States.
When it comes to regulation, you can try to set rules for two groups: one, the device manufacturers/developers; and two, the IT teams that operate and protect the networks on which these devices run.
Right now in the United States, it’s very difficult to hold manufacturers or developers accountable for security defects in their products.
As this article in the New Republic explains, there are several reasons for this, including software license agreements that are structured to significantly limit a vendor’s liability.
Customers, even very large ones, have little recourse to negotiate a software license more favorable to the licensee: for the vendor, no single sale is worth the potential liability it might take on by accepting some responsibility for security vulnerabilities or defects.
And even when users or customers sue under consumer protection laws, “…courts tend to treat certain user security expectations as inherently unreasonable,” as the New Republic article notes.
That is, given the complexity of software and IT systems, and all the myriad ways a system could be breached, courts have generally found that it’s unreasonable to hold a developer or vendor liable for security issues. (Note: The article I’ve referenced is from 2013, so if there are more recent examples of a successful suit against a vendor or developer regarding security defects, I would love to hear about them.)
It’s On You
Thus, most attempts at imposing rules or security standards have been aimed at the organizations that buy and use software. Since we can’t compel developers to create secure software and hardware, and then hold them accountable when they don’t, we try to compel organizations that use those products to manage the risks themselves.
For example, PCI DSS outlines a set of rules and requirements for organizations that accept and process credit and debit cards. Mechanisms are in place to assess whether organizations comply with those rules and requirements.
However, there’s little evidence that our existing crop of regulations and compliance standards has materially reduced the number of breaches or their scope. Creating a new set of rules for IoT, or extending current standards to include IoT, probably won’t help.
How To Shift The Burden? Grievous Bodily Injury And Death
At present, developers and manufacturers have essentially shifted much, if not all, of the risk of running software onto operators. In turn, operators are buckling under the weight of all those risks.
However, as software moves closer to human interaction in the physical world—for example, self-driving cars—some of that risk might start to shift back onto developers.
I think it’s inevitable that a software glitch or a security vulnerability in an autonomous vehicle will cause grievous bodily harm or death.
Injury and death tend to get the attention of consumers, elected representatives, and insurance companies, which in turn may generate sufficient pressure to upend current license agreements and extend consumer protection laws to more robustly address software defects.
From there, it’s possible that such changes might filter into IT systems and software, including IoT devices.
But it’s going to be an ugly, expensive, and drawn-out process. So in the meantime, if you’ve got IoT devices on your network, look into best practices and prepare for problems, because right now it’s all on you.
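One common best practice is knowing exactly which services each IoT device is supposed to expose, and flagging anything else for blocking or investigation. Here’s a minimal sketch of that idea in Python; the device names, ports, and the per-device allowlist structure are all hypothetical examples, not a prescription for any particular product or network.

```python
# Minimal sketch: compare the ports an IoT device actually exposes
# against an allowlist of the services it is *supposed* to run.
# Anything unexpected is a candidate for a firewall block or a
# closer look. Device names and port policies are hypothetical.

ALLOWED_PORTS = {
    "thermostat-lobby": {443},   # HTTPS management interface only
    "fishtank-sensor": {8883},   # MQTT over TLS only
}

def unexpected_ports(device: str, observed: set[int]) -> set[int]:
    """Return observed ports that are not on the device's allowlist.

    Unknown devices have an empty allowlist, so everything they
    expose is flagged.
    """
    return observed - ALLOWED_PORTS.get(device, set())

# Example: a sensor that should only speak MQTT/TLS is also exposing
# telnet (23) and an unencrypted web UI (80) -- both worth blocking.
print(sorted(unexpected_ports("fishtank-sensor", {23, 80, 8883})))
```

In practice the `observed` set would come from a port scan or network monitoring, and the flagged ports would feed into firewall rules or a segmentation policy, but the core discipline is the same: inventory what you have, define what it should do, and treat everything else as a problem.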