Endangered Species: Full Disclosure in Information Security
History, whether fictive or real, is full of situations where claims meet doubts. Nearly every invention and every product is eyed critically, analysed, and tested. There are even whole magazines dedicated to this sport, be it consumer protection, reviews of computer games, or the car of the year. When it comes to testing, the sector of information security is particularly sensitive.
Depending on the hardware or software concerned, testing is not only about comfort or the search for a particularly good storyline, but about incidents that can cause real damage in the real world. How should one deal with knowledge of a design flaw that affects the security of a system?
In 1851 the American locksmith Alfred Charles Hobbs visited the Great Exhibition in London. He was the first to pick the Chubb lock on display at the Crystal Palace: it took him 25 minutes, and he did not damage the lock. He then went on to open the Bramah lock, which was believed to be unpickable. The incident triggered a national crisis among England’s lock engineers. Naturally, some doubted Hobbs’s skills. Had he really opened the lock? Was this some kind of trickery? Would these locks really keep anybody from breaking in?
In the end, Hobbs’s demonstrations led to a rethinking and to better lock designs. The Bank of England replaced all its Chubb locks with other makes. Asked whether it would be wise to publish vulnerabilities, Hobbs himself replied: “Rogues are very keen in their profession and know already much more than we can teach them.”
Responsible / Coordinated / Full / Non-Disclosure
150 years later, the situation has not changed much for the better. Systems still have vulnerabilities, and clever researchers ferret them out. But how should we deal with these findings? On this question, information security has its adherents of different schools of thought. Basically, there are three strategies.
Those who believe in Non-Disclosure do not want to publish any vulnerability at all; at most, if need be, only under a mutual confidentiality agreement. The idea is to withhold information from potential enemies and thereby prevent attacks.
Full Disclosure means the opposite: knowledge about vulnerabilities is spread as soon and as widely as possible. This open approach enables those affected to react and adapt to the newfound threat.
The middle ground, and the third strategy, is called Responsible or Coordinated Disclosure.
First, only the producer or developer is informed about a discovered vulnerability, and the find is made public only once an update that solves the problem is available. Afterwards, all or at least some details about the vulnerability are published as well.
Providing no one with information is surely the worst way to deal with a possible threat.
This leaves us with two viable, and hotly disputed, options. Most of the time, producers patch errors only when they are under pressure. Bugs revealed to the public under responsible or coordinated disclosure have often already had a long processing time (sometimes years, no kidding). Many security researchers find this unacceptable, which has led to numerous disputes between them and the affected companies, some of them carried out in court. Sadly, the ones who suffer from the current state of affairs are almost exclusively users and customers.
Enter TPP, TTIP and TISA
WikiLeaks has published the “Intellectual Property” (IP) chapter of the Trans-Pacific Partnership (TPP). The agreement contains some controversial points that threaten to render future research in the field of information security impossible, or at least very risky. For example, TPP forbids the circumvention of Digital Rights Management (DRM), up to the point where even tinkering with files or devices that contain copyrighted parts can be punished as a violation of digital rights, even where no copyright infringement is involved. And if there is deliberate or commercial intent, it can be prosecuted. It does not take much imagination on the part of rights holders to fight off any effort by security companies: after all, security companies act intentionally and out of commercial interest, a fact that rights owners can now easily use to argue for non-disclosure.
Under TPP it is also possible to have all the materials and tools that led to the violation of “Intellectual Property Rights” destroyed by the authorities. Just imagine all the media and computer systems of an IT security company being seized and destroyed after its appearance at a trade show, simply because it reported a vulnerability.
But TPP is not only about copyright; trade secrets get an upgrade too. The agreement criminalizes “[those who gain] unauthorized, willful access to a trade secret held in a computer system”, no matter the circumstances. Thus, if you stumble upon a way of accessing a computer system during an investigation, you had better wipe the doorknob, hurry away, and just ignore it to minimize the risk of being sued; even reporting it is risky. Of course, companies engaged in security audits normally receive permission from their customers to access their data, but the same is not always true for researchers who test hardware and software for vulnerabilities. Which brings us back to non-disclosure.
It can safely be assumed that the Transatlantic Trade and Investment Partnership (TTIP) will be quite similar to TPP. There is also the Trade in Services Agreement (TISA), currently under negotiation. Published leaks from that agreement include prohibitions on regulations that favour non-proprietary software. But if proprietary systems were used exclusively in all areas, security researchers would automatically have to commit copyright infringement just to do their job, which leads us back to the regulations already discussed above.
Free IT Security
For the benefit of all, IT security researchers must be able to move freely. Nobody (hopefully) thinks it better not to repair weak spots in airplanes, medical equipment, or power plants just to let sleeping dogs lie. All systems can contain errors; there are no exceptions. Testing for failure and proposing improvements are vital to building, maintaining, and enhancing security architectures. If treaties and laws override this vital rule, truly cyber-apocalyptic times lie before us.