[From the last episode: We saw how IoT technology was helping shippers and insurance companies to figure out who’s responsible for container breaches.]
OK, so we’ve been talking a lot about security and how it works. But that’s of no use if IoT device makers don’t use that technology and build it into their devices. What’s to be done about companies that don’t make secure devices?
This gets us into dicey territory: regulation vs. the free market. Ideally, no one would buy insecure devices, and the companies trying to sell them would learn, by going out of business, that you have to make secure devices for people to buy them. But there are a number of problems with this:
- It assumes that everyone will know whether a device is secure or not.
- It assumes that all the marketing and sales material will be truthful.
- It assumes that, if security costs a bit more, people will gladly pay for it.
- There could also be an underlying sentiment that says, “If you’re stupid enough to buy an insecure device and you get hacked, well, it’s your own dang fault.” But your insecure device could put others at risk by acting as a launching point for attacks.
So if the free market (which isn’t always free – if you can buy things only through Amazon, for instance, then that’s not a free market) can’t do this by itself, then must the government step in to protect us?
One Step Before Regulation
Perhaps. We’ll see the difficulty with that in an upcoming post. But in the UK, the government is giving the market a chance to do better before regulating. It has put together a report that lays out guidelines and expectations about what the market should do. The report explicitly states that the government will be watching and that, if this doesn’t work, the next step would be to codify these principles into law. That hasn’t happened yet (as far as I know).
They’ve based the recommendations on five principles:
- Reducing the burden of security on consumers (and other companies involved in manufacturing the devices). In other words, device makers shouldn’t be able to shrug their shoulders and blame the user or manufacturers for a security lapse.
- Being “transparent.” That means that device documentation or packaging should be clear about what specific security measures a device has. (That assumes that it will be truthful, of course…)
- Making security measurable. In other words, finding ways to quantify just how secure a device is.
- “Facilitating dialogue.” This is about creating an environment where companies can share their knowledge of threats and best practices with each other so that the industry as a whole improves.
- Being resilient. This means that, if something does go wrong, there’s a plan in place to handle it. That might include having the device revert to some “safe” state, and it might include having a reporting mechanism in place. (A small sketch of this idea follows the list.)
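To make that resilience point concrete, here’s a minimal sketch in Python of a device loop that falls back to a safe state and reports the failure. All the names (operate_normally, enter_safe_state, report_incident) are hypothetical placeholders of mine, not anything from the UK report.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("device")

def operate_normally() -> None:
    # Placeholder for the device's real work; here it simply fails
    # so that the fallback path below gets exercised.
    raise RuntimeError("sensor read failed")

def enter_safe_state() -> None:
    # Whatever "safe" means for this device: valve closed, heater
    # off, network ports shut, and so on.
    log.info("entering safe state")

def report_incident(error: Exception) -> None:
    # The reporting mechanism: log locally and/or phone home.
    log.error("incident: %s", error)

def run_device() -> None:
    try:
        operate_normally()
    except Exception as error:
        enter_safe_state()      # fail into a known-good configuration
        report_incident(error)

run_device()
```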
A Specific To-Do List
They wind up with 13 specific points in a “code of conduct” (the following are direct quotes; a sketch of the first point appears after the list):
- No default passwords.
- Implement a vulnerability disclosure policy.
- Keep software updated.
- Securely store credentials and security-sensitive data.
- Communicate securely.
- Minimize exposed attack surfaces. (“Attack surface” is a fancy phrase for “ways to break into a device.”)
- Ensure software integrity.
- Ensure that personal data is protected.
- Make systems resilient to outages.
- Monitor system telemetry data.
- Make it easy for consumers to delete personal data.
- Make installation and maintenance of devices easy.
- Validate input data.
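To make the first point concrete, here’s a minimal sketch of a first-boot check that refuses to keep a factory default password. The config format and function names are hypothetical, and a real device would use a salted key-derivation function (PBKDF2 or similar) rather than a bare SHA-256; the plain hash here just keeps the sketch short.

```python
import hashlib
import secrets

# Hash of the factory-set password (a classic offender: "admin").
FACTORY_DEFAULT_HASH = hashlib.sha256(b"admin").hexdigest()

def enforce_unique_password(config: dict) -> dict:
    """Refuse to leave the factory default in place on first boot."""
    if config.get("password_hash") == FACTORY_DEFAULT_HASH:
        # Generate a random per-device password instead of shipping
        # every unit with the same credentials.
        new_password = secrets.token_urlsafe(12)
        config["password_hash"] = hashlib.sha256(
            new_password.encode()
        ).hexdigest()
        print(f"Factory password replaced; new password: {new_password}")
    return config

# First boot: the shipped default triggers the replacement.
enforce_unique_password({"password_hash": FACTORY_DEFAULT_HASH})
```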
Valid Input?
That last one deserves a bit more explanation. Back when I was in engineering school, there were a few iron-clad rules about writing software. One of them was that you always check input data before doing anything with it. If the user enters nonsense (or at least not what you expect), you reject it.
I’ve seen that rule both over- and underused on the internet. Many websites use databases, and there are query languages (SQL is the best-known) that let you get data out of the database. Such languages should never be accessible directly from a website; the website should use them only internally. But there have been hacks where someone literally puts a database query into an input form, and the website blithely goes and executes that query, providing data or access that was never intended. In other words, no one validated the input (expecting, for example, a first name).
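Here’s a minimal sketch of that failure and its fix, using Python’s built-in sqlite3 module. The table and the injected “first name” are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Alice', 'alice@example.com')")

def find_user_unsafe(first_name: str) -> list:
    # DANGEROUS: the input is spliced directly into the query, so a
    # "first name" like x' OR '1'='1 changes the query's meaning.
    query = f"SELECT * FROM users WHERE first_name = '{first_name}'"
    return conn.execute(query).fetchall()

def find_user_safe(first_name: str) -> list:
    # The ? placeholder keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE first_name = ?", (first_name,)
    ).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # every row leaks out
print(find_user_safe("x' OR '1'='1"))    # returns nothing, as it should
```

Parameterized queries are one layer of defense; actually checking that the field looks like a first name is another.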
On the flipside, how many times have you entered a valid phone number of the form, say, “(xxx) xxx-xxxx”, only to have the website state, “Please enter a valid phone number” without telling you what it expects as valid? Here they’re over-restricting and rejecting valid formats that they didn’t make the effort to accept.
Of course, these days, many websites have solved this the lazy way: don’t allow any formatting, and accept the digits only. This violates another rule that used to hold: let people be people, and have the computer do the work of transforming human-understandable data into computer-understandable data. In this case, the programmers have punted on that policy and forced users to enter phone numbers not the way people write them, but the way the computer stores them. Bad form. (But in all likelihood, no one will do anything about it.)
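Here’s a sketch of doing that work for the user instead: accept the common human formats and normalize to bare digits internally. The assumption of 10-digit North American numbers is mine, purely for illustration.

```python
import re

def normalize_phone(raw: str) -> str:
    """Accept common human formats and reduce them to bare digits."""
    digits = re.sub(r"\D", "", raw)   # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]           # drop a leading country code
    if len(digits) != 10:
        raise ValueError(f"expected a 10-digit phone number, got {raw!r}")
    return digits

# "(555) 123-4567", "555.123.4567", and "1-555-123-4567"
# all normalize to the same stored form: 5551234567.
for raw in ["(555) 123-4567", "555.123.4567", "1-555-123-4567"]:
    print(normalize_phone(raw))
```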