We’re used to debates about the ethics of computer security research, but lately that meta-issue has shifted to questioning its lawfulness.
Earlier in the year, there was concern over aspects of the obscure Wassenaar Arrangement, which controls the export of arms to countries deemed likely to misuse them. In addition to weapon systems such as tanks and guided missiles, the Arrangement now also covers some types of software used to break into computer networks, including tools used by security professionals.
With some apparent ‘cybercrime’ attacks looking very much like ‘cyberwarfare’, it was inevitable that some security tools would come under control. While most of us support blocking the supply of weapons to countries likely to misuse them, Wassenaar is murky by nature, and better at stopping shiploads of tanks than a few megabytes of software.
In the last few days, a bill has been drafted in the US Senate to make it illegal to hack into a vehicle, with fines of up to $100,000 for each vehicle hacked. There is no exemption even for hacking a vehicle that the hacker owns.
This follows press reports of the hacking of a Jeep Cherokee in July, precipitating the recall of over a million vehicles to address the vulnerability.
Instead of allowing security professionals to test the security of automotive computer systems, the bill proposes a council to oversee and review submissions from the manufacturers, and to develop ‘best practices’ for the industry.
As this council will be composed mainly of representatives of the motor manufacturing industry, and will take up to a year to form, it mixes the metaphors of the horse bolting before a door has even been fitted to the stable, and of students marking their own work. It is hard to think of a less effective solution.
This flies in the face of long computer industry experience. Apple, Microsoft, Adobe and every other software developer rely heavily on the work of security researchers, or hackers: of the 37 security issues fixed in Apple’s release of OS X 10.11.1, for example, 28 (76%) had been discovered and detailed by security researchers outside Apple.
If security researchers are progressively banned from using network penetration tools on the grounds that they are ‘cyberweapons’, and from trying to hack into motor vehicles, then that will not improve security: it will foster weaker security, with vulnerabilities exploited by attackers before they can be fixed. Like it or not, security researchers play a vital role in improving security for us all.
On the other hand, I doubt that anyone would be happy to fly in a plane with control systems which they knew had been penetrated by a security researcher; I suspect the same goes for vehicles.
Vehicles like cars, buses, and trucks may not be as physically vulnerable as aircraft, but their safety systems are of critical importance. Even issues of their emissions, performance, and fuel economy can become of great concern, as Volkswagen’s problems have shown.
If unrestricted research is unacceptable, and banning research altogether is worse, the only remaining option is some form of regulation, which inevitably needs to be administered by an independent body, such as the respective national safety board. Security researchers might consider such regulation anathema, but if properly administered it should assure the travelling public that their safety is still being protected.
I also believe that these meta-issues go far beyond hacking of cars. In the Internet of Things (IoT), there are many systems, such as medical devices, which are every bit as sensitive as vehicles, and need a similar approach. They also have independent supervisory bodies which would be ideal for the oversight of research.
Indeed, given the possible consequences of hacking even an iKettle (water heaters running dry are a common cause of fires), these meta-issues could be even more general.
Once again, though, we are wrestling with security problems only after they have arisen. These should surely have been addressed by manufacturers and legislators before anyone started selling potentially vulnerable products. Perhaps the legal principle should be caveat venditor rather than caveat emptor?
(Caveat venditor being Latin for ‘let the seller beware’, rather than the general legal principle of caveat emptor, ‘let the buyer beware’.)