The dangers and challenges of the Internet of Things – crucial points made by Bruce Schneier

Futuresagency

Bruce Schneier is one of the most brilliant thinkers on security and technology in general. His recent piece ‘Click Here to Kill Everyone’ is a must-read for anyone looking at the future of technology, and in particular the Internet of Things (IoT).

While preparing for my upcoming keynote at the IoT World Forum in London (May 23rd), I found his comments invaluable, so I figured I should share some of his best morsels here (but be sure to read, print and frame the entire article!). Some highlighting was added by me, for emphasis.

“The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster”

“Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable… We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized”

“We no longer have things with computers embedded in them. We have computers with things attached to them…The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things”

“You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts… Give the internet hands and feet, and it will have the ability to punch and kick”

“The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution”

“As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens… Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore”

“Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power”

“Getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing”

“I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out… Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers”

“We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet”

“That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late… We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected”

“This brings me to my final plea: We need more public-interest technologists. This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy”

“Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. ‘Connect it all’ must be countered with ‘connect us all.’”


Here is a recent illustration I am working on, summarising some of the very same issues. I call it ‘a new meta-intelligence’:

Here is some related stuff that I would like to contribute to the debate:

Gerd Leonhard, CEO of the Futures Agency, believes companies chasing user information “will never want less data from us, and they will find it impossible to resist the mantra of ‘yes we can and so we will,’” describing it as a “huge issue looming right in front of us.” In his estimation, it’s an issue that will need to be addressed both on individual and regulatory levels. Read more

Read my OPEN LETTER TO THE PARTNERSHIP ON AI (TAKE 2)

The Internet of Things and its unintended consequences: why we should proceed with caution (from my blog, 2015)


From a keynote I gave in 2014:

Various images on this topic (all OK to use under a Creative Commons license)

TFA Team