DeepSec 2014 Opening – Would you like to know more?

DeepSec 2014 is open. Right now we start the two tracks with all the presentations found in our schedule. It was hard to make a selection, because we received a lot of submissions with top-quality content. We hope that the talks you attend give you some new perspectives, fresh information, and new ideas on how to protect your data better.

Every DeepSec has its own motto. For 2014 we settled on a quote from the science-fiction film Starship Troopers. The question “Would you like to know more?” is found in the news sections portrayed in the film. It captures the need to know about vulnerabilities and how to mitigate their impact on your data and infrastructure. Of course, we want to know more! This is why we gather at conferences and talk to each other. We are especially proud to welcome friends and projects that attended DeepSec in the past and return with the results of lively discussions.

Of course, we could also have selected “The only good bug is a dead bug” for this year’s conference, but we believe this motto should be every day’s motto.

Enjoy DeepSec 2014!

BIOS-based Hypervisor Threats

The DeepSec 2014 schedule features a presentation about (hidden) hypervisors in server BIOS environments. The research is based on a Russian analysis of a Malicious BIOS Loaded Hypervisor (conducted between 2007 and 2010) and studies published by the University of Michigan in 2005/2006 as well as 2012/2013. The latter publications discuss the capabilities of Virtual-Machine Based Rootkits and Intelligent Platform Management Interface (IPMI) / Baseboard Management Controller (BMC) vulnerabilities. Out-of-band management is sensitive to attacks when not properly protected. In the case of IPMI and BMC, the management components also play a role on the system itself, since they can access the server hardware and are capable of controlling system resources.

Combining out-of-band components with a hypervisor offers ways to watch any operating system running on the server hardware. Or worse. It’s definitely something you can do without. The researcher investigated the published information and found indications of increased execution times of code running on different hardware. The talk will explain the set-up and the hardware being used, and will introduce a test framework enabling researchers to test (server) hardware for anomalies.

The complete research will be published after the talk in a comprehensive article describing the work. We highly recommend attending the presentation.

DeepSec 2014 Talk: Why IT Security Is ████ed Up And What We Can Do About It

Given the many colourful vulnerabilities published (with or without logo) and attacks seen in the past 12 months, one wonders if IT Security works at all. Of course, 100% of all statistics are fake, and only looking at the things that went wrong gives a biased impression. So what’s ████ed up with IT Security? Are we on course? Can we improve? Is it still possible to defend the IT infrastructure?

Stefan Schumacher, director of the Magdeburger Institut für Sicherheitsforschung (MIS), will tell you what is wrong with information security and what you (or we) can do about it. He writes about his presentation in his own words:

Science is awesome. You aren’t doing science in infosec. Why not? Seems to be the overriding message of @0xKaishakunin #AusCERT2014

This was one tweet about my talk on security in a post-NSA age at the AusCERT conference in Australia this year. It pretty much sums up my opinion about what is currently going on in the IT security circus.

Why IT security is ████ed up certainly is a strong stance against what is going on in IT security in general and at conferences like DeepSec in particular. In the three to four decades that modern IT security has existed, we have come a long way in securing our machines, processes and networks. However, certain fields of IT security are thoroughly ignored in research and practical application.

This has to do with computer science being the primary science behind IT security. Computer science is the child of mathematics as a formal science and of the engineering sciences. This limits the scientific methods to those used in those fields.

Unfortunately, IT security is more than just mathematics and engineering. Neither social engineering nor human behaviour can be explained with CS methods, nor can they be countered with them. The same goes for political/policy problems, like intelligence services attacking our human rights in the digital space of living. This is a political problem, and we need a political solution for it. So political science also plays a role in IT security.

When we keep this in mind, we see that current IT security lacks further development in certain fields. So I propose to emancipate IT security research from Computer Science and turn it into a new field of science. We can use the methods and tools of CS, Maths and engineering, but also need the methods, tools and philosophies (!) of humanities and social sciences like psychology and pedagogy.

So let’s go and create a new science. It will be fun and games until theories of science clash. ;-)

New Article for the DeepSec Proceedings Publication

In cooperation with the Magdeburger Institut für Sicherheitsforschung (MIS) we publish selected articles covering topics of past DeepSec conferences. The publication offers in-depth descriptions which extend the conference presentations and include a follow-up with updated information. The latest addition is Marco Lancini’s article titled Social Authentication: Vulnerabilities, Mitigations, and Redesign.

High-value services have introduced two-factor authentication to prevent adversaries from compromising accounts using stolen credentials. Facebook has recently released a two-factor authentication mechanism, referred to as Social Authentication (SA). We designed and implemented an automated system able to break the SA, to demonstrate the feasibility of carrying out large-scale attacks against social authentication with minimal effort on behalf of an attacker. We then revisited the SA concept and propose reSA, a two-factor authentication scheme that can be easily solved by humans but is robust against face-recognition software.

The MIS web site has a collection of all published articles. The full articles can be found in the special edition “In Depth Security – Proceedings of the DeepSec Conferences”.

DeepSec 2014 Talk: The IPv6 Snort Plugin

The deployment of the new Internet Protocol Version 6 (IPv6) is gathering momentum. A lot of applications now have IPv6 capabilities. This includes security software. Routers and firewall systems were first, now there are also plugins and filters available for intrusion detection software such as Snort. Martin Schütte will present the IPv6 Snort Plugin at DeepSec 2014. We have asked him to give us an overview of what to expect.

  • Please tell us the top 5 facts about your talk!
    • Main research for my talk was done in 2011. I am quite surprised (and a little bit frightened) by how little the field of IPv6 security has developed since then.
    • It is often easier to build attack tools than to defend against them. But to improve IPv6 network security we urgently need more detection and defence tools.
    • The Snort IPv6 plugin is my approach to strengthen network security. It uses just a few building blocks to add new detection techniques to an old and established framework.
    • The software project is a product of my diploma thesis; unfortunately, I had to abandon it afterwards. So if anyone is interested in it and could help with further development, they are more than welcome.
    • It used to be difficult to compile the software, but I have now taken the time to build a Debian package. I will publish it at DeepSec.
  • How did you come up with it? Was there something like an initial spark that set your mind on it?
    We started with the question ‘Why is IPv6 adoption so slow?’ One hypothesis was that there was a lack of sufficiently advanced network and security monitoring tools. Nobody wants to operate a network without any estimation on its activity and security implications. So I selected an IDS as a good way to approach these security issues. An IDS cannot solve all problems, but in many cases just making the issues and activity visible is already a big step ahead.
  • Why do you think this is an important topic?
    IPv6 is inevitable and we have to deal with it. As a protocol stack it has lots of problems of its own, and the whole v4 to v6 transition adds a second layer of problems on top of that. But in the medium term (say, for the next decade) it is the only viable solution to the current IP address shortage.
  • Is there something you want everybody to know – some good advice for our readers maybe? Except for “come to my talk” ?
    Advice to anyone in network security: Ask your vendors about IPv6 operations and security functions! Too many people (even equipment providers) still hope IPv6 will not affect them, and they end up with dysfunctional and insecure products.
  • A prediction for the future – what’s next? What do you think will be the next innovations or future downfalls – for IT-Security in general and / or particularly in your field of expertise?
    For IPv6 security: there are some more protocol layers to analyze, especially multicast comes to mind. Another very interesting and highly relevant topic is the set of security issues caused by IPv4/IPv6 interaction and routing. So far we know of routing loop attacks against ISATAP, 6to4, and Teredo (documented in RFC 6324); in the future I would expect more of these, directed against common IPv6/IPv4 tunnelling and transition configurations.

Martin’s presentation is one of the IPv6 talks we offer at DeepSec 2014. We recommend all IPv6 talks and the IPv6 workshop for anyone dealing with networks, either passively or actively.

DeepSec 2014 Talk: Build Yourself a Risk Assessment Tool

“The only advice I might give to everyone who is responsible for information security is that it is never about a tool or a methodology”, says Vlado Luknar. The never-ending quest for the “best” tool or methodology is a futile exercise. In the end it is you, the security specialist, who adds the most value to a risk assessment (RA) / threat modelling process for your company, claims Vlado Luknar (Orange Slovensko a.s. / France Telecom Orange Group). In his talk at DeepSec Mr. Luknar will demonstrate that it is quite easy to capture your overall security knowledge in a home-made, free-of-charge tool. But first, let’s ask Mr. Luknar a couple of questions:

1) Mr. Luknar, please tell us the top 5 facts about your talk!

  1. There is no problem with understanding existing RA methodologies, yet it is really not that easy to start with any of them.
  2. There is no single best approach to RA for everyone.
  3. For a RA to be practical we need to simplify things as much as we can.
  4. The presentation is for those practitioners who are subject to hefty compliance requirements, all of which demand a formal risk assessment.
  5. Exaggerating a little, we could say that the best thing about RA is not the result but the journey itself.

2) How did you come up with it? Was there something like an initial spark that set your mind on the topic of your talk?

One of the key disappointments for me, as a (naive) practitioner, was the fact that no methodology would discover for me something I didn’t have a chance to know about before we started the RA journey. And I don’t mean a forgotten piece of sensitive data, or a server which we discovered when trying to solve the R(asset) = T × V × I formula.

I mean the real discovery: after you went through all the exercises, responded to all questions, calculated everything that could be calculated, and after you finally pushed that red button labelled START on your mysterious RA machine… The machine then makes a few cranky sounds, coughs a couple of times and finally spits out the ominous verdict:

YOU’VE GOT A BIG RISK OF CLASSIFIED DATA BEING STOLEN, ABUSED OR DISCLOSED!

This is it? Well, yes. Nothing more, nothing less.

Then I realized that performing a risk assessment is about the best collective judgment you can make from the facts you are able to collect. Only much later did I discover a very similar statement in NIST SP 800-39, Managing Information Security Risk.

3) Why do you think this is an important topic?

Despite all that scholars know about risk, it remains a vague and somewhat confusing concept. Everybody talks about it, asks for it, but only a few know how to go about it, in particular those who really depend on it every day. And then there are those who don’t know that they should depend on it, and that it should be an organic part of any security management and not a lifeless requirement from a standard. Done properly, it can save you a lot of trouble; done merely formally, you just cheat yourself.

4) Is there something you want everybody to know – some good advice for our readers maybe? Except for “come to my talk” ;)

The only advice I might give to everyone who is responsible for information security is that it is never about a tool or a methodology.

It is you, the well-informed internal expert, and the team around you, who add the real value to the process, method or tool. The tool or the methodology is just a facilitator, although an important one.

5) A prediction for the future – what’s next? What do you think will be the next innovations or future downfalls  – for IT-Security in general and / or particularly in your field of expertise?

Some industries have already experienced it, not always handling it properly – that is, the growing pressure of regulation, and the open, public comparison of products based on security. One of the major “conflicts of interest” is that of technological advances versus privacy issues. To me, security is only another attribute of quality (in cases where the product does not directly depend on it), and for many, mostly economic, reasons it does not yet make it there. The conflict of privacy vs. technology should inevitably make security a native part of any functional and design specification during the standard SDLC. It is not happening yet, especially with traditional businesses moving to the web: just look at companies who provide GPS monitoring or smart home management – how many of them use even SSL, or something more than a password on a web page? But it will change very soon.

DeepSec 2014 Talk: Cloud-based Data Validation Patterns… We need a new Approach!

Data validation threats (e.g. sensitive data, injection attacks) account for the vast majority of security issues in any system, including cloud-based systems. Current methodology in nearly every organisation is to create data validation gates. But when an organisation implements a cloud-based strategy, these security-quality gates may inadvertently become bypassed or suppressed. Everyone relying on these filters should know how they can fail and what it means to your flow of data.

Geoffrey Hill has been in the IT industry since 1990, when he developed and sold a C++ application to measure risk in the commodities markets in New York City. He was recently employed by Cigital Inc., a company that specializes in incorporating secure engineering development frameworks into the software development life-cycles of client organizations.  He was leading the software security initiative at a major phone manufacturer and a major central European bank over the course of the last three years.

Currently Geoffrey is starting up his own security consulting company called Artis-Secure. It is focused on making security development frameworks better integrated with business processes.
As for hobbies apart from information security: he’s currently planning a massive fancy-dress gathering next year in an Irish castle. Social engineers, beware! And between all of this he was so kind to answer some questions about the talk he’s going to give at our upcoming conference…

1. Please tell us the top 5 facts about your talk.

a. The contents will be very useful for enterprise and cloud projects.
b. I will show how the data validation problem is getting increasingly complex.
c. My proposed design uses current technologies.
d. I will describe a validation methodology that is language- and process-agnostic.
e. My talk outlines a lightweight and simple solution.

2. How did you come up with it? Was there something like an initial spark that set your mind on it?
I have been frustrated by the lack of coherent and concrete validation patterns in my previous projects. I needed to think of a simple way to sanitize and constrain unknown inputs, given the complexities of multiple languages, exceptions and character sets. My talk came out of this.

3. Why do you think this is an important topic?
Data validation threats (e.g. sensitive data, injection attacks) account for the vast majority of security issues in any system, including cloud-based systems. However, the current approach for validation patterns needs to be revisited and simplified or there will be no adoption in the developer community.

4. Is there something you want everybody to know – some good advice for our readers maybe? Except for “come to my talk”. ;)
Good security patterns are very useful in a fast moving development environment because they can be easily deployed with minimal disruption. My talk is aimed at fellow security professionals who can use this information.

5. A prediction for the future – what do you think will be the next innovations or future downfalls when it comes to cloud-based strategies, particularly in your field of expertise? Is the cloud here to stay? What’s next?
“Each time history repeats itself, the price goes up”. The ‘Internet of Things’ (IoT) will be driven by new devices and cloud-based operations, making for incredibly complex meta-systems. This complexity will bring with it new security challenges. I believe that many costly mistakes could be made with this next advance in IT. The key to properly addressing these challenges with fewer mistakes is to implement simple models that are based on well-known security patterns. I see the next innovative wave of security as creating and providing standard libraries of these design patterns.

DeepSec 2014 Talk: Safer Six – IPv6 Security in a Nutshell

The Internet Protocol Version 6 (IPv6) is the successor to the currently dominant IP Version 4 (IPv4). IPv6 was designed to address the need for more addresses and for better routing of packets in a world filled with billions of networks and addresses alike. Once you decide to develop a new protocol, you have the chance to avoid all the mistakes of the past. You can even design security features from the start. That’s the theory. In practice IPv6 has had its fair share of security problems. There has been a lot of research, and several vulnerabilities have been discussed at various security conferences. DeepSec 2014 features a presentation called Safer Six – IPv6 Security in a Nutshell held by Johanna Ullrich of SBA Research, a research centre for information security based in Vienna. She answers questions about the content of the talk and the ongoing research in IPv6 security.

  • Please tell us the top 5 facts about your talk!
    IPv6 is the successor of today’s IPv4 protocol and overcomes address depletion by offering 2^128 distinct addresses. However, the protocol lacks security and privacy, and vulnerabilities are found in the novel extension headers, neighbour and multicast listener discovery, or tunnelling. Analysing them, I infer three major challenges with respect to IPv6: First, all of today’s address formats have at least one serious shortcoming, and effort is required for the development of a secure yet maintainable addressing system. Second, security on the local network practically does not go beyond IPv4’s, although a number of approaches have been presented. Last but not least, reconnaissance is still an advantageous aspect in networking, and appropriate techniques have to be developed.
  • How did you come up with it? Was there something like an initial spark that set your mind on IPv6?
    Writing my master thesis on the compression of secure communication in powerline systems, I encountered IPv6 for the first time. Starting at SBA Research afterwards, I was able to devote my first six months to intensive IPv6 studies including standards, scientific publications and community boards. I realized that an in-depth knowledge of the protocol requires a lot of time and people could benefit by providing this knowledge in a nutshell.
  • Why do you think this is an important topic?
    IP is THE Internet Protocol, and the Internet is a vital part of almost everybody’s life. So I doubt that anybody will be able to get around IP’s new version 6. Is this single reason enough to convince you?
  • Is there something you want everybody to know – some good advice for our readers maybe? Except for “come to my talk”. :)
    Don’t condemn IPv6, but neither praise it to the skies. It is just another protocol having its advantages and disadvantages.
  • A prediction for the future – what’s next? What do you think will be the next innovations or future downfalls – for IT-Security in general and / or particularly in your field of expertise?
    I am worried about today’s “Yes-we-can” mentality of bringing everything online: your coffee machine, your car, automation systems or the smart grid. These systems were developed as stand-alone systems; connecting them to the Internet in some way violates their primary specification and may induce serious security risks. Even worse are the threats induced by a vulnerability: while in traditional IT this might result in non-availability and economic loss, it may expand to life-threatening situations, e.g., in an automation system or your car.

Despite the fact that most of the Internet still uses IPv4, don’t forget that IPv6 is widely available via packet tunnels. Modern operating systems have built-in IPv6 connectivity through these tunnels, so the problems discussed in this presentation are not something you only have to deal with in the far future. Therefore we recommend Johanna’s talk for everyone using the Internet.

In addition we wish to point out that DeepSec 2014 also features an in-depth IPv6 security workshop titled IPv6 Attacks and Defenses – A Hands-on Workshop held by Enno Rey of ERNW GmbH.

DeepSec 2014 Workshop: Hacking Web Applications – Case Studies of Award-Winning Bugs

The World Wide Web has spread vastly since the 1990s. Web technology has developed a lot of methods, and the modern web site of today has little in common with the early static HTML shop windows. The Web can do more. A lot of applications can be accessed by web browsers, because it is easier in terms of having a client available on most platforms. Of course, sometimes things go wrong, bugs bite, and you might find your web application and its data exposed to the wrong hands. This is where you and your trainer Dawid Czagan come in. We offer you a Web Application Hacking training at DeepSec 2014.

Have you ever thought of hacking web applications for fun and profit? How about playing with authentic, award-winning bugs identified in some of the greatest companies? If that sounds like fun, then you should definitely join this workshop! Dawid will discuss bugs that he has found together with Michał Bentkowski in a number of bug bounty programs (including Google, Yahoo, Mozilla and others). You will learn how bug hunters think and how to hunt for bugs effectively. To be successful in bug hunting, you need to go beyond automated scanners. If you are not afraid of going into detail and doing manual/semi-automated analysis, then this workshop is for you.
You will be given a VMware image with a specially prepared environment to play with the bugs. What’s more, after the workshop is over, you are free to take it home and hack again, at whatever pace is best for you. To get the most out of this workshop, basic knowledge of web application security is needed. You should also have used a proxy, such as Burp or similar, to analyse or modify traffic.

Dawid is an experienced security researcher who has found security vulnerabilities in products by Google, Yahoo!, Mozilla, Microsoft, Twitter, BlackBerry and other companies. He will lead you through case studies of high-profile and high-impact flaws in the fabric of software exposed via HTTP(S). Cryptography doesn’t help you if the web logic behaves faultily. The training will show you how web applications work, how they can be analysed, and what critical bugs look like. All you need is your laptop and a way to run the provided virtual images.

DeepSec 2014 Workshop: Understanding x86-64 Assembly for Reverse Engineering and Exploits

Assembly language is still a vital tool for software projects. While you can do a lot much more easily with all the high-level languages, the most successful exploits still use carefully designed opcodes. It’s basically just bytes that run on your CPU. The trick is to get the code into position, and there are lots of ways to do this. In case you are interested, we can recommend the training at DeepSec held by Xeno Kovah, Lead InfoSec Engineer at The MITRE Corporation.

Why should you be interested in assembly language? Well, doing reverse engineering and developing exploits is not all you can do with this knowledge. Inspecting code (or data that can be used to transport code in disguise) is part of information security. Everyone accepts a set of data from the outside world. Most commonly organisations, individuals, and companies consume web pages or receive email. As soon as you deal with filters, you need to worry about code hidden in data. You can get fancy and run an intrusion detection system, too. If you do, then you are in the business of dealing with opcodes – provided you look for exploits in the wild.

The training at DeepSec is especially interesting for penetration testers and everyone involved in defence. Analysing malicious software is a good example that combines defence and reverse engineering with assembly language. You really miss a lot of the things attackers try to tell you if you don’t speak x86_64! The information gained will also help you to recognise and mitigate attacks. Plus, it’s not as hard as you think. Despite x86 assembly having hundreds of special-purpose instructions, you will be shown that it is possible to read most programs by knowing only around 20-30 instructions and their variations.

Don’t miss this training! It’s a rare occasion. Take advantage of it!

Once you register, don’t forget to bring your laptop, a compiler (Microsoft Visual C++ Express 2012) and a way to run the provided Linux VM.

RandomPic XSA-108

What a couple of Infosec people thought about XSA-108.

Apparently some were a little bit disappointed that XSA-108 affects “only” HVM. Sorry, not another catastrophe, not another Heartbleed, Shellshock or something in this class. Only a vulnerability which potentially allows access to other VMs.

Anyway, time for an update!

(Idea shamelessly stolen from aloria)

DeepSec 2014 Workshop: Suricata Intrusion Detection/Prevention Training

Getting to know what’s going on is a primary goal of information security. There is even a name for it: intrusion detection. And there are tools to do this. That’s the easy part. Once you have decided you want intrusion detection or intrusion prevention, the implementation part becomes a lot more difficult. Well, if you need help with this issue, there is a two-day workshop for you at DeepSec 2014 – the Suricata Training Event.

Suricata is a high-performance Network Intrusion Detection System (IDS), Intrusion Prevention System (IPS) and Network Security Monitoring engine. It can serve pretty much all your needs. It’s Open Source (so it cannot be bought and removed from the market) and owned by a very active community. Suricata is managed by a non-profit foundation, the Open Information Security Foundation (OISF). OISF’s mission is to remain on the leading edge of open source IDS/IPS development to meet the ongoing needs of the community.
The two-day training event is held by core developers of Suricata. This means you get all the information on how intrusion detection works and how the rules can be created and adapted to your needs straight from the experts. Attending the workshop will not only give you greater proficiency in Suricata’s core technology; you will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s developers. You will get the theory plus hands-on exercises with live packets and detection signatures. A sample of the topics that will be covered over the course of the two-day training includes:

  • Compiling, installing, and configuring Suricata
  • Performance factors, rules and rule sets
  • Capture methods and performance
  • Event / data outputs and capture hardware
  • Troubleshooting common problems
  • Advanced tuning
  • Integration with other tools
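
To give a flavour of the rule language the training covers, here is a simple signature in Suricata’s rule syntax. The message, sid and match content are made up for illustration; consult the official rule documentation for the exact keyword set of your Suricata version.

```
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"LOCAL outbound sqlmap user agent"; flow:established,to_server; content:"sqlmap"; http_user_agent; nocase; classtype:attempted-recon; sid:1000001; rev:1;)
```

A rule names a protocol, addresses and direction, then a list of keyword options that refine the match; writing, tuning and performance-testing such signatures is exactly what the hands-on exercises are about.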

If you own or use a network, then you should definitely be interested in IDS/IPS – get the packets before the packets get you! The workshop is tailored for developers, technologists, and security professionals. Even if you are new to Suricata or IDS in general, the training is a perfect starting point to get familiar with the topic. Make sure to book early, the number of tickets for all workshops is limited!

DeepSec 2014 Talk: A Myth or Reality – BIOS-based Hypervisor Threat

Backdoors are devious. Usually you have to look for them, since someone has hidden or “forgotten” them. Plus, backdoors are very fashionable these days. You should definitely get one or more. Software is (very) easy to inspect for any rear entrances. Even if you don’t have access to the source code, you can deconstruct the bytes and eventually look for suspicious parts of the code. When it comes to hardware, things might get complicated. Accessing code stored in hardware can be complex. Besides, it isn’t always clear which one of the little black chips holds the real code you are looking for. Since all the devices we use every day run on little black chips (the colour doesn’t matter, really), everyone with trust issues should make sure that control of these devices is in the right hands – or bytes.

DeepSec 2014 features a talk where BIOS-based hypervisors are discussed. There is ongoing research on this topic. Getting control of a computer’s BIOS or any other part of crucial firmware allows adversaries to either control or at least watch the code a machine is running. Imagine your firmware having an extra layer of virtualisation technology. Usually this is undesirable, especially if an unknown third party is accessing this layer. We asked a well-known Information Security Specialist to present the state of the research regarding these extra features in hardware. The talk will also introduce means to detect hidden hypervisors in firmware and give examples where these modifications were found in the wild. Once attackers go extra sneaky on the stealth scale, you don’t always get the luxury of detecting backchannel activity. Aliens don’t always phone home, unfortunately.

Everyone using black boxes and computer chips should attend this talk. We know that avoiding unknown firmware and chip designs is hard (hence the term hardware), but you should pay attention to unauthorised modifications of these components. This is crucial if you use the hardware for your own (or someone else’s) infrastructure.

Keep in mind: The materials of this presentation have not been published before, and the research covers a period of more than a year. Drop by and raise your paranoia level! ☺

Back from 44CON – Conference Impressions

If you weren’t at 44CON last week, you missed a lot of good presentations. Plus you missed great speakers, an excellent crew, “gin o’clock” each day, a wonderful audience, and great coffee from ANTIPØDE (where you should go when in London and in desperate need of good coffee).

Everyone who occasionally uses wireless connections (regardless of whether Wi-Fi or mobile phone networks) should watch the talks on GreedyBTS and on improving Wi-Fi penetration testing with rogue access points. GreedyBTS is a base transceiver station (BTS) enabling 2G/2.5G attacks by impersonating a legitimate BTS. Hacker Fantastic explained the theoretical background and demonstrated what a BTS-in-the-middle can do to the Internet traffic of mobile phones. Intercepting and re-routing text messages and voice calls can be done, too. Implementing the detection of fake base stations is now a very good idea. Some specialised phones do this already. Recently an Austrian research team published work on detecting interception equipment. Unless you use additional security mechanisms, you should take a look at these technologies. It doesn’t hurt to know about them anyway.
Hacking Wi-Fi got a serious boost by a presentation from Dominic White. The title was Manna from Heaven: Improving the state of wireless rogue AP attacks, and it showed the state of affairs of modern Wi-Fi hardware. Vendors have tried to defend against the attacks of the past. Especially when it comes to stripping SSL, things such as the hard-coded root certificates in Google’s web browser make the life of a pen-tester hard. The updated toolbox called mana will help you to deal with modern Wi-Fi clients. Everyone using wireless communication should know about the risks involved. When in doubt, always remember that connecting to a wireless network is a strong form of exposure.

Conan Dooley talked about the challenges of running a network infrastructure at a hacker conference. When you deal with very talented and creative people, off-the-shelf solutions might not be the way to go. He offered very useful insights into the operation and gave helpful hints for anyone facing a similar challenge. If it works for a hacker con, it will probably do some good for your enterprise network.

Joxean Koret demonstrated how to break malware detection in anti-virus software. His examples shed light on the quality of this software. A lot of anti-virus products disable the protection mechanisms of the operating system in order to perform their tests. In the worst case this exposes the system to attack code (paradoxically enabling malicious code to exploit the anti-virus software to gain a foothold). Again, anti-virus filters aren’t the magical solution to malware entering your network. Joxean did a very good job showing this, and we recommend looking at the examples he gave in his presentation.

Speaking of incidents, you should think about them in advance. Steve Armstrong spoke about beginner’s incident handling mistakes and how to avoid them. Investigating the trails of attackers and throwing them out of your network and hosts is a task that relies on a lot more than technology. Step by step Steve explained the core failures and concepts. In addition he presented a tool called CyberCPR which enables response teams to collaborate and securely exchange information about a case. It’s still in its beta stage, but we suggest giving it a try.

We definitely look forward to attending 44CON next year! See you 8-11 September 2015 in London!

DeepSec 2014 Talk: Why Anti-Virus Software fails

Filtering inbound and outbound data is most certainly a part of your information security infrastructure. A prominent component is the anti-virus content filter. Your desktop clients probably have one. Your emails will be read by these filters first. While techniques like this have been around for a long time, they regularly draw criticism. According to some opinions the concept of anti-virus is dead. Nevertheless it’s still a major building block of security architecture. The choice can be hard, though. DeepSec 2014 features a talk by Daniel Sauder, giving you an idea why anti-virus software can fail.

Anyone who starts thinking about anti-virus evasion will quickly see that it can be achieved with little effort (see, for example, last year’s DeepSec talk by Attila Marosi). If an attacker wants to hide a binary executable file with a Metasploit payload, the main points for accomplishing this goal are

  • encrypting/encoding the payload and using a custom shellcode binder to escape signature scanning, and
  • using a technique for evading the sandbox.
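The first point can be illustrated with a deliberately simple sketch. A single-byte XOR is far weaker than what a real shellcode binder would use, but it shows why a static signature no longer matches the transformed bytes (the hex string below is a placeholder, not actual shellcode):

```python
def xor_encode(payload: bytes, key: int) -> bytes:
    """Transform the payload byte-by-byte with a single-byte XOR key,
    so the stored bytes no longer match a static signature."""
    return bytes(b ^ key for b in payload)

def xor_decode(encoded: bytes, key: int) -> bytes:
    """The 'binder' side: recover the original bytes at run time.
    XOR is its own inverse, so this mirrors xor_encode."""
    return bytes(b ^ key for b in encoded)

# A stand-in for a payload; any byte string works here.
payload = bytes.fromhex("fc4883e4f0e8c0000000")
encoded = xor_encode(payload, 0xAA)

assert encoded != payload                     # signature bytes are gone
assert xor_decode(encoded, 0xAA) == payload   # run-time recovery works
```

This is also why the second point matters: since the decoder must run before the payload does, scanners emulate execution to see the decoded bytes, and the attacker in turn tries to break out of that emulation.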

By developing further evasion techniques it is possible to research the internal functionality of anti-virus products. For example, it can be determined whether a product uses x86 emulation, what the emulation is capable of, and which Microsoft Windows API calls disturb the anti-virus engine itself. Other tests include building an .exe file without a payload generated by msfpayload or well-known attack tools, as well as 64-bit payloads and escaping techniques.
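The probes described in the talk target Windows internals, but the underlying idea can be sketched portably. One classic sandbox test (presented here as a generic illustration, not as one of Daniel's actual techniques) checks whether a requested sleep really consumed wall-clock time; some emulators fast-forward sleeps to speed up scanning:

```python
import time

def sleep_skipped(seconds: float = 0.2) -> bool:
    """Ask the OS to sleep, then verify the wall clock actually advanced.
    An emulator that fast-forwards sleeps will return control after far
    less real time than requested, revealing itself."""
    start = time.monotonic()
    time.sleep(seconds)
    elapsed = time.monotonic() - start
    return elapsed < seconds * 0.5

print(sleep_skipped())  # expected to be False on real, unemulated hardware
```

Real engines have countermeasures for the obvious variants of this trick, which is exactly why probing many different behaviours, as the talk does, maps out what each emulator can and cannot fake.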

At the time of this writing Daniel Sauder has developed 36 different techniques as proof-of-concept code and tested them against 8 different anti-virus products. More techniques and engines are pending. Together with documentation, papers, and talks from other researchers, this gives a deeper understanding of how anti-virus software works and shows where it fails, both in general and in particular cases.

Anti-virus software is no magic solution that always works perfectly. If you run filters of this kind, we recommend attending Daniel’s talk. Once you know how your defence mechanisms fail, you can work to improve them.