Thoughts about “Offensive Security Research”
Ever since security-relevant information has been published, there have been discussions about how to handle it. Many remember the full/no/responsible disclosure battles that erupt frequently. Now there is a new term on stage. Its name is “offensive security research”. The word “offensive” apparently refers to the intent to attack IT systems, “security” marks the connection, and “research” covers anyone being too curious. This is nothing new; it is just the old discussion about disclosure in camouflage. So there should be nothing to worry about, right? Let’s look at statements from Adobe’s security chief Brad Arkin.
At a security analyst summit Mr. Arkin claimed that his goal is not to find and fix every security bug. Instead, his strategy is to “drive up the cost of writing exploits”, he explained. According to his keynote speech, this task becomes impossible “when researchers go public with techniques and tools to defeat mitigations, they lower that cost”. This is basically the same full/no/responsible disclosure debate which has been fought over and over again in the past. While we have not heard Mr. Arkin’s full presentation, the article quoted above leaves some important questions unanswered. How do you increase the cost of writing exploits, and what do you do about the research community that is allegedly sabotaging “countermeasures”?
The easy answer for making exploits more expensive in terms of effort is to make writing the exploit or deploying it more difficult (or both). In an ideal world, exploitation would have been considered and addressed at the design stage. For most software this isn’t the case, since features keep piling up and the code often uses components that follow a different approach to secure design. This leaves your initial design vastly modified, and you have to deal with it unless you ditch your code base and start from scratch for every new release, which is hardly feasible. Most of the time you will end up fixing attack vectors by adding more code, or at least rearranging it, as the sketch below illustrates.
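To make the “adding more code” point concrete, here is a minimal sketch in Python. The names (parse_record_naive, MAX_PAYLOAD and friends) are purely hypothetical; in a memory-unsafe language the same unchecked length field would be a classic buffer overflow, but the pattern of the fix is the same: the vulnerable logic stays, and validation code is bolted on around it.

    import struct

    MAX_PAYLOAD = 64 * 1024  # hypothetical upper bound for a sane record size

    def parse_record_naive(data: bytes) -> bytes:
        # Trusts the attacker-controlled length field completely.
        (length,) = struct.unpack_from(">I", data, 0)
        return data[4:4 + length]

    def parse_record_hardened(data: bytes) -> bytes:
        # The "fix" is simply more code: validate sizes before using them.
        if len(data) < 4:
            raise ValueError("record too short for a length field")
        (length,) = struct.unpack_from(">I", data, 0)
        if length > MAX_PAYLOAD:
            raise ValueError("declared length exceeds sane limit")
        if 4 + length > len(data):
            raise ValueError("declared length exceeds available data")
        return data[4:4 + length]

The hardened version does nothing new conceptually; it is strictly larger than the naive one, which is exactly how most attack vectors get closed after the fact.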
And then there is the data. Since we are talking about Adobe, it’s no secret that the most widely deployed Adobe tools are Adobe Reader and Adobe Flash Player. Both deal with rather unpleasant data formats. PDF has a lot of capabilities and features, and the standard leaves enough ambiguity for two different PDF readers to parse the same document in different ways (there is a talk from 27C3 called OMG WTF PDF which explores this matter in greater depth, here is the video link). The Flash Player deals with multimedia formats, which are far more complex than text-based content, and multimedia data can be a promising distribution method for malicious software. Given these specifications, security researchers find fertile ground for bug hunting.
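As a rough illustration of how such ambiguities play out, consider duplicate keys inside a dictionary-like structure. The Python sketch below is contrived and simplified far beyond a real PDF parser (the dictionary content is invented), but it shows two readers making different yet defensible choices and ending up with different views of the same bytes:

    import re

    # Contrived, simplified "PDF dictionary" with a duplicated /Length key.
    AMBIGUOUS_DICT = b"<< /Type /Example /Length 10 /Length 4096 >>"

    def parse_first_wins(raw: bytes) -> dict:
        # Reader A keeps the first occurrence of each key.
        pairs = re.findall(rb"/(\w+)\s+(/?\w+)", raw)
        result = {}
        for key, value in pairs:
            result.setdefault(key.decode(), value.decode())
        return result

    def parse_last_wins(raw: bytes) -> dict:
        # Reader B lets later occurrences override earlier ones.
        pairs = re.findall(rb"/(\w+)\s+(/?\w+)", raw)
        return {key.decode(): value.decode() for key, value in pairs}

    print(parse_first_wins(AMBIGUOUS_DICT)["Length"])  # prints 10
    print(parse_last_wins(AMBIGUOUS_DICT)["Length"])   # prints 4096

Whenever two consumers of the same format disagree like this, one of them can usually be tricked into seeing content the other one filters, scans or displays differently.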
Let’s turn to offensive security research. What should be done about it? Maybe we should first examine how exploits come to life. What is the basis for a successful exploit? Knowledge. Offensive code usually exploits a design flaw, a bug or some other weakness. In order to create the code, you need to know the conditions it operates in. If you wanted to control the information needed to write exploit code, you could start with the PDF specification itself. Publishing badly designed data formats or protocols can be seen as an offensive act if you stretch your imagination slightly (this is probably the main reason why some specifications require non-disclosure agreements). Even talking about design flaws and weaknesses can be a problem. This is exactly why some results of security research go through a publication procedure, and there are many ways to do it. Depending on the impact, security researchers give vendors a varying amount of time to react before publishing. Regardless of the impact, the benefit of disclosure remains: it enables the security community to warn others.
There is another side effect of disclosure: published information about a vulnerability destroys the value of 0-days. Once it is public, you can mitigate, you can load your intrusion detection sensors with rules (a toy version of such a check is sketched below), and you can probably do a lot more. It’s easy to use the phrase “offensive security research”, but it’s hard to explore its meaning. We are open for discussion, and our Call for Papers is open as well.
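To show what “loading your sensors with rules” means in its simplest possible form, here is a hedged Python sketch. The byte pattern KNOWN_BAD_MARKER is purely hypothetical and stands in for whatever indicator a real advisory would describe; real IDS engines compile many such signatures and match them on network or file streams rather than on whole files:

    import sys

    # Hypothetical indicator taken from a published advisory.
    KNOWN_BAD_MARKER = b"/JS (eval(unescape("

    def looks_suspicious(path: str) -> bool:
        # Naive single-pattern scan over the whole file.
        with open(path, "rb") as handle:
            return KNOWN_BAD_MARKER in handle.read()

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            verdict = "suspicious" if looks_suspicious(filename) else "clean"
            print(f"{filename}: {verdict}")

Crude as it is, a check like this only becomes possible once the details are published, which is precisely the point about disclosure destroying the value of 0-days.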