Dual Use Equation: Knowledge + Vulnerability = “Cyber” Nuclear Missile
We all rely on software every day, one way or another. The bytes that form the code all around us are here to stay. Mobile devices and networked computing equipment in general are a major part of our lives now. Fortunately, not every system failure is a matter of life and death. The ongoing discussion about "cyber war", "cyber terrorism", "cyber weapons of mass destruction", and "cyber" in general has reached critical levels – it has found its way into politics. Recently the Wassenaar Arrangement added rules on the publication of exploited (previously unknown) vulnerabilities in software and hardware, the so-called "0days". The US Department of Commerce proposed to apply export controls to 0days and malicious software. While the controls are only intended for "intrusion software", they may apply to a wide variety of code and also to publications about security vulnerabilities. The criteria for "intrusion software" according to Wassenaar read as follows:
Software ‘specially designed’ or modified to avoid detection by ‘monitoring tools,’ or to defeat ‘protective countermeasures,’ of a computer or network-capable device, and performing any of the following: (a) The extraction of data or information, from a computer or network-capable device, or the modification of system or user data; or (b) The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.
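Read literally, criterion (b) describes any ordinary plugin or scripting mechanism. A minimal sketch in Python illustrates the point (this is our own hypothetical example, not code from the proposal):

```python
# A perfectly ordinary plugin loader: it modifies the standard
# execution path of this program in order to execute externally
# provided instructions -- exactly what criterion (b) describes.

def run_plugin(plugin_source: str) -> dict:
    """Execute externally provided code and return the names it defines."""
    namespace = {}
    exec(plugin_source, namespace)  # "externally provided instructions"
    return namespace

# "External" instructions, e.g. loaded from a user's config directory.
plugin = "def greet(name):\n    return 'Hello, ' + name"
ns = run_plugin(plugin)
print(ns["greet"]("auditor"))  # -> Hello, auditor
```

Nothing about this loader is malicious, yet a strict reading of the definition covers it.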
The definition leaves a lot of room for ambiguity and speculation. The modification of system or user data happens on a daily basis; you might even accomplish it with a word processor. Apart from "normal" software, you definitely run into problems once you think about penetration tests, which are a routine part of securing a network or a set of connected systems. When it comes to (information) security research, you are definitely in trouble. A lot of research involves tools that break stuff. To quote from the README file of the network fuzzing tool ISIC: "ISIC may break shit, melt your network, knock out your firewall, or singe the fur off your cat." That definitely sounds intrusive. It may even be designed to defeat said protective countermeasures. However, it is neither "cyber munitions" nor the equivalent of Thor's Hammer (or any other mythological wonder weapon you prefer).
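Fuzzing tools like ISIC work on a very simple principle: take valid input and randomly corrupt it. A minimal mutation fuzzer, sketched in Python (the function and names are ours, not ISIC's):

```python
import random

def mutate(data: bytes, flips: int, seed: int = 0) -> bytes:
    """Return a copy of `data` with `flips` randomly chosen bytes replaced.

    Feeding such mutated inputs to a parser or network service is the
    core of mutation fuzzing -- and by design it tries to trigger
    behaviour that "protective countermeasures" did not anticipate.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

packet = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
fuzzed = mutate(packet, flips=4)
# The fuzzed request keeps its length but not its structure; sending
# thousands of such variants is how fuzzers find crashes -- and how
# they "break shit" in the process.
```

The point is that there is no technical line separating this defensive testing technique from something the proposal would classify as intrusive.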
The proposal harms information security research. A disclosed vulnerability can be dealt with. Disclosing information about bugs in software and hardware is a way to disarm a digital weapon (which is a very bad metaphor!) and to close yet another door for "intrusion software". In addition, bans or export controls create closed circles within the countries of origin where "dangerous code" will be discussed and developed. Depending on the angle you look at it from, the proposal of the US Department of Commerce can also be seen as a way to develop domestic 0days and to force researchers to keep these attack tools in the country – for whatever reason your imagination can come up with. Speculation aside, the proposal is too broad to be useful at the current state of technology. Instead it is essentially a ban on all IT security products and tools. It harms defenders more than it deters attackers from using 0days on your infrastructure.
We strongly agree with Halvar Flake and recommend his blog article on the matter, aptly titled Why changes to Wassenaar make oppression and surveillance easier, not harder. No one doing security research needs new policies that make life harder. Comparing code with weapons doesn't help either. Providing a safe legal basis for security research is the way to go, especially with the proliferation of Internet-connected devices just around the corner.