A sysadmin, a software developer, and an infosec researcher almost walked into a bar. Unfortunately they couldn’t agree where to go together. So they died of thirst.
Sound familiar? When it comes to information technology, there is one thing that binds us all together: software. This article was written and published by software. You can read it by using (different) software. This doesn’t automagically create stalwart bands of adventurers fighting dragons (i.e. code vulnerabilities) and doing good deeds (i.e. not selling 0days). However, it is common ground where one can meet. Since all software has bugs, and we all use software, there’s also a common cause. Unfortunately this is where things go wrong.
Code has a life cycle. It usually starts out as a (reasonably) good idea. Without a Big Bang. Then the implementation process begins with a proof of concept (in the case of destructive software) or a prototype (which sounds a lot more friendly but may be even more cruel). Everything after this stage is your typical Software Development Cycle™. Depending on the culture of the company or organisation, your code base either evolves by improvement or degenerates into a mess of patches, fixes, workarounds, and corporate Bullshit Bingo™ a.k.a. „structured quality assurance“. Success is another factor. Once the software is widely used, there is a tendency not to change it as much as might be necessary. YMMV, of course.
Things get really interesting when security researchers find critical bugs. More often than not, applications break at the seams between components, or can be attacked through the dreaded „legacy code“ that has lived in some function since the first prototype or early release. Now you can add your daily dose of hindsight, usually taken from a „Secure Coding“ workshop, and point out that it all could have been avoided by proper maintenance. This is just another word for code refactoring. It is an essential part of software development. So much for the theory. Once you take a good look at the root causes of many CVE® entries, you rarely see refactoring in action. Of course this is a drastically simplified view of security-relevant bugs in code. There is a very prominent example in the form of the widely deployed OpenSSL library. The Heartbleed bug led to a refactoring revolution called LibreSSL. The OpenSSL team did the same and documented the goals in their roadmap.
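Heartbleed itself boiled down to trusting an attacker-supplied length field when building the heartbeat reply. A minimal sketch of that bug class, in hypothetical Python rather than the real OpenSSL C code (function names and the in-process „memory“ stand-in are ours):

```python
def heartbeat_reply_unsafe(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    """Trusts the attacker-controlled length field. If claimed_len exceeds
    the actual payload, adjacent 'memory' leaks into the reply -- the
    Heartbleed class of bug."""
    return (payload + memory)[:claimed_len]

def heartbeat_reply_safe(payload: bytes, claimed_len: int) -> bytes:
    """Refactored: validate the declared length against the real payload
    before constructing the reply."""
    if claimed_len > len(payload):
        raise ValueError("declared length exceeds payload")
    return payload[:claimed_len]

# A 3-byte payload with a claimed length of 8 leaks adjacent data:
leak = heartbeat_reply_unsafe(b"hat", 8, b"secretkey")  # b"hatsecre"
```

The fix is a single bounds check; the hard part, as the paragraph above argues, is that nobody had gone back to question the old code until the bug became a headline.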
Refactoring code is not easy. It is more than just fixing typos or using Yet Another Elegant Programming Language Feature™. It requires experienced developers who know what the code does, and who are very familiar with the tasks the code doesn’t do well. Every single executable comes with a free dose of hacks nobody knows about. Finding these workarounds is an effort well spent. Fixing small issues might prevent big issues from gaining momentum.
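To illustrate the kind of hidden workaround meant above, here is a hypothetical example (not taken from any real project): a silent fallback buried in a parser, and a refactored version that turns the hack into an explicit, testable contract:

```python
# Legacy version: bad input silently "works" via an undocumented fallback.
def parse_port_legacy(value: str) -> int:
    try:
        return int(value)
    except ValueError:
        return 8080  # hidden workaround nobody remembers adding

# Refactored version: the fallback is explicit, and garbage fails loudly.
def parse_port(value: str, default: int = 8080) -> int:
    """Parse a TCP port; fall back to `default` only for empty input."""
    if not value.strip():
        return default
    port = int(value)  # invalid input now raises instead of hiding
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The refactored function does no more work than the legacy one, but its failure modes are visible, which is exactly where small fixes stop big issues from gaining momentum.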
Since the Call for Papers for DeepSec 2015 is still open, we are interested in hearing your code refactoring stories and their impact on security (i.e. we want you to break some things, but end with the princess and the prince getting together ☻). We are looking forward to seeing your presentation on stage!
P.S.: Don’t pick on sysadmins, developers, infosec people, or anyone involved in software testing. We’re all mad here together, you know.