DeepSec 2014 Talk: Safer Six – IPv6 Security in a Nutshell

The Internet Protocol version 6 (IPv6) is the successor to the currently dominant IP version 4 (IPv4). IPv6 was designed to address the need for more addresses and for better routing of packets in a world filled with billions of networks and hosts alike. Once you decide to develop a new protocol, you have the chance to avoid all the mistakes of the past. You can even design security features from the start. That’s the theory. In practice IPv6 has had its fair share of security problems. There has been research on this matter, and several vulnerabilities have been discussed at various security conferences. DeepSec 2014 features a presentation called Safer Six – IPv6 Security in a Nutshell held by Johanna Ullrich of SBA Research, a research centre for information security based in Vienna. She answers questions about the content of the talk and the ongoing research in IPv6 security.

  • Please tell us the top 5 facts about your talk!
    IPv6 is the successor to today’s IPv4 protocol and overcomes address depletion by offering 2^128 distinct addresses. However, the protocol falls short on security and privacy: vulnerabilities have been found in the novel extension headers, in neighbour and multicast listener discovery, and in tunnelling. Analysing them, I infer three major challenges with respect to IPv6: First, all of today’s address formats have at least one serious shortcoming, and effort is required to develop an addressing scheme that is both secure and maintainable. Second, security on the local network practically does not go beyond IPv4’s, although a number of approaches have been presented. Last but not least, reconnaissance still gives attackers an advantage, and appropriate defensive techniques have to be developed.
  • How did you come up with it? Was there something like an initial spark that set your mind on it?
    Writing my master’s thesis on the compression of secure communication in powerline systems, I encountered IPv6 for the first time. Starting at SBA Research afterwards, I was able to devote my first six months to intensive IPv6 studies, including standards, scientific publications and community boards. I realized that in-depth knowledge of the protocol takes a lot of time to acquire, and that people could benefit from having this knowledge presented in a nutshell.
  • Why do you think this is an important topic?
    IP is THE Internet Protocol, and the Internet is a vital part of almost everybody’s life. So I doubt that anybody will be able to get around IP’s new version 6. Is this single reason enough to convince you?
  • Is there something you want everybody to know – some good advice for our readers maybe? Except for “come to my talk”. :)
    Don’t condemn IPv6, but neither praise it to the skies. It is another protocol having its advantages and disadvantages.
  • A prediction for the future – what’s next? What do you think will be the next innovations or future downfalls – for IT-Security in general and / or particularly in your field of expertise?
    I am worried about today’s “Yes-we-can” mentality of bringing everything online — your coffee machine, your car, automation systems or the smart grid. These systems were developed as stand-alone systems; connecting them to the Internet in some way violates their primary specification and may introduce serious security risks. Even worse are the threats induced by a vulnerability: while in traditional IT this might result in non-availability and economic loss, it may escalate to life-threatening situations, e.g. in an automation system or your car.
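One concrete example of the address-format shortcomings mentioned in the interview is the modified EUI-64 interface identifier, which embeds a device’s MAC address into its IPv6 address and thereby makes a host recognisable in every network it visits. A minimal sketch of the derivation (the MAC address below is made up):

```python
def eui64_interface_id(mac: str) -> str:
    # Modified EUI-64: flip the universal/local bit of the first octet
    # and insert ff:fe between the two halves of the MAC address.
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{iid[i]:02x}{iid[i + 1]:02x}" for i in range(0, 8, 2))

# The MAC address survives verbatim in the lower 64 bits of the IPv6 address:
print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```

Privacy extensions (randomised interface identifiers) were introduced precisely to counter this kind of tracking.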

Despite the fact that most of the Internet still uses IPv4, don’t forget that IPv6 is widely available via tunnelling. Modern operating systems have built-in IPv6 connectivity through these tunnels, so the problems discussed in this presentation are not something you can postpone to the future. Therefore we recommend Johanna’s talk for everyone using the Internet.

In addition we wish to point out that DeepSec 2014 also features an in-depth IPv6 security workshop titled IPv6 Attacks and Defenses – A Hands-on Workshop held by Enno Rey of ERNW GmbH.

DeepSec 2014 Workshop: Hacking Web Applications – Case Studies of Award-Winning Bugs

The World Wide Web has spread vastly since the 1990s. Web technology has evolved considerably, and the modern web site of today has little in common with the early static HTML shop windows. The Web can do more. A lot of applications can be accessed by web browsers, because it is easier in terms of having a client available on most platforms. Of course, sometimes things go wrong, bugs bite, and you might find your web application and its data exposed to the wrong hands. This is where you and your trainer Dawid Czagan come in. We offer you a Web Application Hacking training at DeepSec 2014.

Have you ever thought of hacking web applications for fun and profit? How about playing with authentic, award-winning bugs identified in some of the greatest companies? If that sounds like fun, then you should definitely join this workshop! Dawid will discuss bugs that he has found together with Michał Bentkowski in a number of bug bounty programs (including Google, Yahoo, Mozilla and others). You will learn how bug hunters think and how to hunt for bugs effectively. To be successful in bug hunting, you need to go beyond automated scanners. If you are not afraid of going into detail and doing manual/semi-automated analysis, then this workshop is for you.
You will be given a VMware image with a specially prepared environment to play with the bugs. What’s more, after the workshop is over, you are free to take it home and hack again, at whatever pace is best for you. To get the most out of this workshop, basic knowledge of web application security is needed. You should also have used an interception proxy, such as Burp or similar, to analyse or modify traffic.

Dawid is an experienced security researcher who has found security vulnerabilities in products by Google, Yahoo!, Mozilla, Microsoft, Twitter, BlackBerry and other companies. He will lead you through case studies of high-profile and high-impact flaws in the fabric of software exposed via HTTP(S). Cryptography doesn’t help you if the web application logic is faulty. The training will show you how web applications work, how they can be analysed, and what critical bugs look like. All you need is your laptop and a way to run the provided virtual images.

DeepSec 2014 Workshop: Understanding x86-64 Assembly for Reverse Engineering and Exploits

Assembly language is still a vital tool for software projects. While you can do a lot more easily with high-level languages, the most successful exploits still use carefully designed opcodes. It’s basically just bytes that run on your CPU. The trick is to get the code into position, and there are lots of ways to do this. In case you are interested, we can recommend the training at DeepSec held by Xeno Kovah, Lead InfoSec Engineer at The MITRE Corporation.

Why should you be interested in assembly language? Well, doing reverse engineering and developing exploits is not all you can do with this knowledge. Inspecting code (or data that can be used to transport code in disguise) is part of information security. Everyone accepts a set of data from the outside world. Most commonly organisations, individuals, and companies consume web pages or receive email. As soon as you deal with filters, you need to worry about code hidden in data. You can get fancy and run an intrusion detection system, too. If you do, then you are in the business of dealing with opcodes – provided you look for exploits in the wild.

The training at DeepSec is especially interesting for penetration testers and everyone involved in defence. Analysing malicious software is a good example that combines defence and reverse engineering with assembly language. You really miss a lot of what attackers try to tell you if you don’t speak x86_64! The information gained will also help you to recognise and mitigate attacks. Plus, it’s not as hard as you think. Although x86 assembly has hundreds of special-purpose instructions, you will be shown that it is possible to read most programs by knowing only around 20-30 instructions and their variations.

Don’t miss this training! It’s a rare occasion. Take advantage of it!

Once you register don’t forget to bring your laptop, a (Microsoft Visual C++ Express 2012) compiler and a way to run the provided Linux VM.

RandomPic XSA-108

What a couple of Infosec people thought about XSA-108.

(Image: cats staring at an empty plate, on a loop)

Apparently some were a little bit disappointed that XSA-108 affects “only” HVM. Sorry, not another catastrophe, not another Heartbleed, Shellshock or something in this class. Only a vulnerability which potentially allows access to other VMs.

Anyway, time for an update!

(Idea shamelessly stolen from aloria)

DeepSec 2014 Workshop: Suricata Intrusion Detection/Prevention Training

Getting to know what’s going on is a primary goal of information security. There is even a name for it: intrusion detection. And there are tools to do this. That’s the easy part. Once you have decided you want intrusion detection or intrusion prevention, the implementation part becomes a lot more difficult. Well, if you need help with this issue, there is a two-day workshop for you at DeepSec 2014 – the Suricata Training Event.

Suricata is a high performance Network Intrusion Detection System (IDS), Intrusion Prevention System (IPS) and Network Security Monitoring engine. It can serve pretty much all your needs. It’s Open Source (so it cannot be bought and removed from the market) and backed by a very active community. Suricata is managed by a non-profit foundation, the Open Information Security Foundation (OISF). OISF’s mission is to remain on the leading edge of open source IDS/IPS development to meet the ongoing needs of the community.
The two-day training event is held by core developers of Suricata. This means you get all the information on how intrusion detection works and how the rules can be created and adapted to your needs straight from the experts. Attending the workshop will not only give you greater proficiency in Suricata’s core technology, you will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s developers. You will get the theory plus hands-on exercises with live packets and detection signatures. A sample of topics that will be covered over the course of the two-day training includes:

  • Compiling, installing, and configuring Suricata
  • Performance factors, rules and rule sets
  • Capture methods and performance
  • Event / data outputs and capture hardware
  • Troubleshooting common problems
  • Advanced tuning
  • Integration with other tools
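To give a flavour of the rule-writing part of the training, here is a minimal Suricata rule in the classic signature syntax; the sid and the matched content are invented for illustration:

```
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Example - sqlmap User-Agent seen"; content:"sqlmap"; http_user_agent; nocase; sid:1000001; rev:1;)
```

Rules like this are loaded from the rule files referenced in suricata.yaml; writing, tuning and benchmarking them is exactly the kind of material the workshop covers in depth.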

If you own or use a network, then you should definitely be interested in IDS/IPS – get the packets before the packets get you! The workshop is tailored for developers, technologists, and security professionals. Even if you are new to Suricata or IDS in general, the training is a perfect starting point to get familiar with the topic. Make sure to book early, the number of tickets for all workshops is limited!

DeepSec 2014 Talk: A Myth or Reality – BIOS-based Hypervisor Threat

Backdoors are devious. Usually you have to look for them, since someone has hidden or „forgotten“ them. Plus, backdoors are very fashionable these days. You should definitely get one or more. Software is (very) easy to inspect for any rear entrances. Even if you don’t have access to the source code, you can deconstruct the bytes and eventually look for suspicious parts of the code. When it comes to hardware, things might get complicated. Accessing code stored in hardware can be complex. Besides, it isn’t always clear which one of the little black chips holds the real code you are looking for. Since all the devices we use every day run on little black chips (the colour doesn’t matter, really), everyone with trust issues should make sure that control of these devices is in the right hands – or bytes.

DeepSec 2014 features a talk where BIOS-based hypervisors are discussed. There is ongoing research on this topic. Getting control of a computer’s BIOS or any other piece of crucial firmware allows adversaries to control or at least watch the code a machine is running. Imagine your firmware having an extra layer of virtualisation technology. Usually this is undesirable, especially if an unknown third party is accessing this layer. We asked a well-known information security specialist to present the state of the research regarding these extra features in hardware. The talk will also introduce means to detect hidden hypervisors in firmware and give examples where these modifications were found in the wild. Once attackers get extra sneaky on the stealth scale, you don’t always get the luxury of detecting backchannel activity. Aliens don’t always phone home, unfortunately.

Everyone using black boxes and computer chips should attend this talk. We know that avoiding unknown firmware and chip designs is hard (hence the term hardware), but you should pay attention to unauthorised modifications of these components. This is crucial if you use the hardware for your own (or someone else’s) infrastructure.

Keep in mind: The materials of this presentation have not been published before, and the research covers a period of more than a year. Drop by and raise your paranoia level! ☺

Back from 44CON – Conference Impressions

If you haven’t been at 44CON last week, you missed a lot of good presentations. Plus you haven’t been around great speakers, an excellent crew, “gin o’clock” each day, wonderful audience, and great coffee from ANTIPØDE (where you should go when in London and in desperate need of good coffee).

Everyone occasionally using wireless connections (whether Wi-Fi or mobile phone networks) should watch the talks on GreedyBTS and the improvements in Wi-Fi penetration testing through fake alternative access points. GreedyBTS is a base transceiver station (BTS) enabling 2G/2.5G attacks by impersonating a BTS. Hacker Fantastic explained the theoretical background and demonstrated what a BTS-in-the-middle can do to the Internet traffic of mobile phones. Intercepting and re-routing text messages and voice calls can be done, too. Implementing the detection of fake base stations is now a very good idea. Some specialised phones do this already. Recently an Austrian research team published work on detecting interception equipment. Unless you use additional security mechanisms, you should take a look at these technologies. It doesn’t hurt to know about them anyway.
Hacking Wi-Fi got a serious boost by a presentation from Dominic White. The title was Manna from Heaven: Improving the state of wireless rogue AP attacks, and it showed the state of affairs of modern Wi-Fi hardware. Vendors have tried to defend against the attacks of the past. Especially when it comes to stripping SSL, things such as the hard-coded root certificates in Google’s web browser make the life of a pen-tester hard. The updated toolbox called mana will help you to deal with modern Wi-Fi clients. Everyone using wireless communication should know about the risks involved. When in doubt, always remember that connecting to a wireless network acts as a strong form of exposure.

Conan Dooley talked about the challenges of running a network infrastructure at a hacker conference. Once you deal with very talented and creative people, then off-the-shelf solutions might not be the way to go. He offered very useful insights into the operation and gave helpful hints for anyone having to deal with a similar challenge. If it works for a hacker con, it will probably do some good for your enterprise network.

Joxean Koret demonstrated how to break the detection of malware by anti-virus. His examples shed light on the quality of the software. A lot of anti-virus products disable the protection mechanisms of the operating system in order to perform tests. This can lead to exposing the system to attack code in the worst case (paradoxically enabling malicious code to exploit the anti-virus software to gain a foothold). Again anti-virus filters aren’t the magical solution to malware entering your network. Joxean did a very good job showing this, and we recommend looking at the examples he gave in his presentation.

Speaking of incidents, you should think about them in advance. Steve Armstrong spoke about beginner’s incident handling mistakes and how to avoid them. Investigating the trails of attackers and throwing them out of your network/hosts is a task that relies on a lot more than technology. Step by step Steve explained the core failures and concepts. In addition he presented a tool called CyberCPR which enables response teams to collaborate and securely exchange information about a case. It’s still in its beta stage, but we suggest giving it a try.

We definitely look forward to attending 44CON next year! See you 8-11 September 2015 in London!

DeepSec 2014 Talk: Why Anti-Virus Software fails

Filtering inbound and outbound data is most certainly a part of your information security infrastructure. A prominent component is the anti-virus content filter. Your desktop clients probably have one. Your emails will be read first by these filters. While techniques like this have been around for a long time, they regularly draw criticism. According to some opinions the concept of anti-virus is dead. Nevertheless it’s still a major building block of security architecture. The choice can be hard, though. DeepSec 2014 features a talk by Daniel Sauder, giving you an idea why anti-virus software can fail.

Anyone who starts to think about anti-virus evasion will see that it can be achieved quite easily (see for example last year’s DeepSec talk by Attila Marosi). If an attacker wants to hide a binary executable file with a Metasploit payload, the main points for accomplishing this goal are

  • encrypting/encoding the payload and using a custom shellcode binder to escape signature scanning, and
  • using a technique for evading the sandbox.
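To illustrate the first point: even a trivial single-byte XOR encoder changes every byte a static signature could match on. The sketch below uses harmless placeholder bytes, not real shellcode:

```python
def xor_encode(data: bytes, key: int) -> bytes:
    # XOR every byte with the key; applying the same call again decodes it
    return bytes(b ^ key for b in data)

payload = b"\x90\x90\xcc\xc3"          # placeholder bytes, not a real payload
encoded = xor_encode(payload, 0xAA)

assert encoded != payload              # the on-disk byte pattern has changed
assert xor_encode(encoded, 0xAA) == payload  # a small stub restores it at runtime
```

This is why signature scanning alone is insufficient, and why the sandbox/emulation evasion mentioned in the second point matters: the engine has to run or emulate the decoder stub to see the original bytes.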

By developing further evasion techniques it is possible to research the internal functionality of anti-virus products. For example, it can be determined whether a product uses x86 emulation or not, what the emulation is capable of, and which Microsoft Windows API calls can disturb the anti-virus engine itself. Other tests include building an .exe file without a payload generated by msfpayload and well-known attack tools, as well as 64-bit payloads and escaping techniques.

At the time of writing, Daniel Sauder has developed 36 different techniques as proof-of-concept code and tested them against 8 different anti-virus products. More techniques and engines are pending. Together with documentation, papers, and talks from other researchers, this gives a deeper understanding of how anti-virus software works and shows where it fails, both in general and in particular cases.

Anti-virus software is no magic solution that will always perfectly work. If you run filters of this kind, we recommend attending Daniel’s talk. Once you know how your defence mechanisms fail, you can work to improve them.

DeepSec 2014 Talk: Advanced Powershell Threat – Lethal Client Side Attacks

Modern environments feature a lot of platforms that can execute code by a variety of frameworks. There are UNIX® shells, lots of interpreted languages, macros of all kinds (Office applications or otherwise), and there is the Microsoft Windows PowerShell. Once you find a client, you usually will find a suitable scripting engine. This is very important for defending networks and – of course – attacking them. Nikhil Mittal will present ways to use the PowerShell in order to attack networks from the inside via the exploitation of clients.

PowerShell is the “official” shell and scripting language for Windows. It is installed by default on all post-Vista Windows systems and is found even on XP and Windows 2003 machines in an enterprise network. Built on the .NET framework, PowerShell allows interaction with almost everything one finds in a Windows machine and network. One can access the system registry, the Windows API, WMI, COM objects, .NET libraries, other machines on the network and so on. It is very useful for system administrators, which makes it an ideal tool/platform for penetration testers as well.

PowerShell has various distinct advantages over binaries and other non-Windows scripting languages. It is trusted by the operating system, the system administrators and antivirus. It is possible to perform various attacks using PowerShell without dropping anything to disk. Add to this the ability to natively interact with the machine and the network, and you have a tool for penetration tests which is too good to be true!

There has been much interesting work on the use of PowerShell in penetration tests. The talk will introduce Nishang, a toolkit for using PowerShell in penetration tests. Its scripts are divided into the following categories:

  • Backdoors – Contains DNS, HTTP and SSID backdoors.
  • Escalation – Escalate privileges, introduce vulnerabilities.
  • Execution – Execute code in memory using DNS TXT records, get authenticated shell access to an MSSQL Server.
  • Gather – Log keystrokes, get credentials in plaintext, check for open ports on a target, dump the SAM file, WLAN keys and LSA secrets.
  • Pivot – Execute PowerShell commands and scripts on other machines in the network.
  • Scan – Port scanning and brute forcing.
  • Utility – Add persistence, exfiltrate data, encode scripts.
  • Powerpreter – A script module with almost all the functionality of Nishang in a single script.
  • Antak – A webshell in ASP.NET which utilizes PowerShell.

One question frequently asked by users of Nishang is this: How can PowerShell be used to get access to a network? Could it be used to gain a foothold in an enterprise network? Yes, of course: use client-side attacks.

In this presentation it will be demonstrated that a client-side attack with PowerShell is very effective, as it exploits human ignorance and uses features of PowerShell – both inherent to any enterprise network. The attacks demonstrated include phishing (the user clicks on a link), malicious attachments (MS Word and Excel), malicious shortcuts (.LNK files), and attacks using Java applets and Human Interface Devices (HIDs).

There have been many instances of PowerShell being used by malware writers for client-side attacks. Notable examples include a Russian ransomware, malware infecting MS Office files, and malicious shortcuts.

The PowerShell scripts that will be presented in this talk draw inspiration from some of the above attacks.

This talk should be attended by those who do external penetration tests and would like to know more about using PowerShell for this purpose. System Administrators should also attend this talk to understand the latest tools used by the attackers.

For anyone intending to dive deeper into the powers of the PowerShell, we strongly recommend booking Nikhil’s training course, also held at DeepSec 2014.

DeepSec 2014 Talk: Trusting Your Cloud Provider – Protecting Private Virtual Machines

The „Cloud“ technology has been in the news recently. No matter if you use „The Cloud™“ or any other technology for outsourcing data, processes and computing, you probably don’t want to forget about trust issues. Scattering all your documents across the Internet doesn’t require a „Cloud“ provider (you only need to click on that email with the lottery winnings). Outsourcing any part of your information technology sadly requires a trust relationship. How do you solve this problem? Armin Simma of the Vorarlberg University of Applied Sciences has some ideas and will present them at DeepSec 2014.

The presentation shows a combination of technologies that make clouds trustworthy. One of the top inhibitors to moving (virtual machines) to the cloud is security. Cloud customers do not fully trust cloud providers. The problem with sending virtual machines to the cloud is that „traditional“ encryption is no solution, because encrypted data (equal to code in our case) cannot be executed. Taking a closer look at the numerous surveys about cloud adoption (and at inhibitors to moving to the cloud), it can be seen that insider attacks rank among the most critical attacks. The insider in our scenario is the administrator (or a user with high/elevated privileges).

The key point is: the cloud customer does not trust the administrator of the cloud provider’s system.

The solution to this problem is based on

  1. Trusted Computing technology and
  2. Mandatory Access Control.

Mandatory Access Control is used to prevent the administrator – who must be able to administer and thus access the host system – from accessing Virtual Machines running on top of the cloud provider’s system(s). Our system is able to log all activities of users. Users (including the administrator himself/herself) are not able to manipulate this log.

The former technology – Trusted Computing – is used as a mechanism for giving the cloud customer proof that the system hosting his Virtual Machines (= the cloud provider’s infrastructure) was not manipulated. The proof is hardware-based: it prevents several kinds of attacks, e.g. rootkits or other BIOS-manipulating attacks. The proof is based on measuring all system parts which were executed since boot time of the physical machine. Each part is measured before execution. This „measurement chain“ is called Trusted Boot. A standardised tamper-resistant hardware component (the TPM) plus a standardised protocol allows for the proof to the customer. The proof is called attestation.

A second technology used for securing the cloud is Trusted Computing’s sealing mechanism. Sealing is an extension of asymmetric encryption: the decryption is done within the hardware (TPM), but only if the current measurement values are equal to predefined reference values. The reference values are defined by the cloud customer and specify a known good system plus system configuration. These techniques (Trusted Boot, Attestation, Sealing) allow the cloud customer to be sure that a specific (trustworthy) system is running on the provider’s site.
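The „measurement chain“ described above can be sketched as a hash chain in the style of a TPM PCR extend operation; the component names below are purely illustrative:

```python
import hashlib

def pcr_extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: new PCR value = SHA-256(old PCR || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at power-on
for component in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, component)

# Any modified component, or a changed order, yields a different final value.
```

Because each value depends on all previous measurements, a manipulated bootloader cannot reproduce the expected final value; attestation compares this result against the customer’s reference values, and sealing releases keys only when they match.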

Armin’s talk is of interest to implementers and users of „Cloud“ infrastructure alike. In case you are playing with Trusted Computing, you should attend his talk as well.

DeepSec 2014 Talk: An innovative and comprehensive Framework for Social Vulnerability Assessment

Do you get a lot of email? Do customers and business partners send you documents? Do you talk to people on the phone? Then you might be interested in an assessment of your vulnerability by social interactions. We are proud to host a presentation by Enrico Frumento of CEFRIEL covering this topic.

As anyone probably knows, spear-phishing is nowadays the most effective threat, and it is often used as the first step of the most sophisticated attacks. Even JP Morgan Chase’s recent data breach seems to have originated from a single employee (just one was enough!) who was targeted by a contextualised mail. In this new scenario it is hence of paramount importance to consider the human factor in companies’ risk analyses. However, is every company potentially vulnerable to these kinds of attacks? How is it possible to evaluate this risk through a specific vulnerability assessment?

These are the questions that we will try to address. Since 2010, when we presented our study about a Cognitive Approach for Social Engineering at the DeepSec 2010 conference, we have been working on the extension of traditional security assessments, going beyond the technology and including the „Social“ context. Over these years we have had the opportunity to work on this topic with several big European enterprises, which allowed us to face the difficulties related to the impact of this kind of activity on the relationship between employees and employer, both from the ethical and the legal point of view.

This experience allowed us to develop a specific methodology for performing Social Vulnerability Assessments (SVA), ensuring ethical respect for employees and legal compliance with European work regulations and standards. The legal constraints, which shape the limits of what these assessments can investigate, are quite cumbersome to understand, but we have built up considerable experience, especially with the Italian legal framework, which allows the execution of these studies. We now regularly perform Social Vulnerability Assessments in enterprises as an integrated service.
Using our methodology over these years, we performed about 15 Social Vulnerability Assessments in big enterprises with thousands of employees (roughly 10,000 people in total): this gave us a relevant first-hand view of the real vulnerability of enterprises to modern non-conventional security threats.

In this talk we will share our experience, describing how we conduct a Social Vulnerability Assessment, and present an overview of the results collected so far. These results may actually help to understand the risk level related to spear-phishing attacks inside companies, and some conclusions may be unexpected.

We highly recommend attending this presentation if you have to face advanced attacks against your organisation.

DeepSec 2014 Talk: Build Yourself a Risk Assessment Tool

All good defences start with some good ideas. This is also true for information security. DeepSec 2014 features a presentation by Vlado Luknar, who will give you decent hints and a guideline on how to approach the dreaded risk assessment with readily available tools. We have kindly asked Vlado to give you a detailed teaser on what to expect:

It seems fairly obvious that every discussion about information security starts with a risk assessment. Otherwise, how do we know what needs to be protected, and how much effort and resources we should put into preventing security incidents and potential business disasters? With limited time and budget at hand, we’d better know very well where to look first and what matters most.

If we look at some opinion-making bodies in information security, such as ISF, ISACA or (ISC)², and of course, at ISO/IEC standards (the 27000 series), we can see that in information security there is no escape from risk assessment. A difficult question for anyone responsible for managing information security is to decide when to rely on best practice or baseline security controls and when to apply risk assessment and in what detail.

Risk assessment in information security is, at least in theory, quite self-evident. We have to

  • look at business objectives,
  • identify assets these objectives are built upon,
  • identify the core underlying systems that represent the assets,
  • estimate the potential impact (I) on these assets if something goes wrong (a threat materializes), and
  • identify threats to these assets by looking at related vulnerabilities and countermeasures eliminating them (probability – P).

In the end, we arrive at our risk R by calculating impact times probability (I×P), almost certainly identifying some actions to perform (i.e. measures to mitigate the risk, either its impact or probability).
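The calculation itself is small enough for a spreadsheet or a few lines of code; the assets, threats and scores below are invented for illustration:

```python
# A minimal risk register: impact (I) and probability (P) on a 1-5 scale
risks = [
    {"asset": "customer database", "threat": "SQL injection", "impact": 5, "probability": 3},
    {"asset": "mail gateway",      "threat": "open relay",    "impact": 2, "probability": 4},
    {"asset": "office laptops",    "threat": "theft",         "impact": 3, "probability": 2},
]

for r in risks:
    r["risk"] = r["impact"] * r["probability"]  # R = I x P

# Highest risk first tells you where to act
for r in sorted(risks, key=lambda r: r["risk"], reverse=True):
    print(f'{r["asset"]:20} {r["threat"]:15} R={r["risk"]}')
```

The hard part, of course, is not the multiplication but producing defensible values for I and P, and keeping them up to date.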

Well, that’s the easy part. Now we are supposed to repeat this risk assessment – either as soon as any of these variables changes (objective, asset, system, threat, vulnerability, control) or at least once per agreed-on period of time.

In general, existing security compliance frameworks (e.g. ISO/IEC 27001, COBIT, PCI DSS) do not dictate any specific risk assessment methodologies or tools. The common agreement seems to be: stick to the scheme of asset-threat-vulnerability-control and use whatever suits you to objectively reduce risks – anything that allows you to be consistent, reproducible and measurable.

And this is the moment you start looking for a risk assessment tool, be it a commercial or an open-source one. In fact, your security team may be small and you may have no risk assessment specialist in there. Yet you need to be structured about your security risk decisions in an easy-to-do and efficient manner every time you run an assessment, hoping that if you don’t catch something important in this run, you will succeed in the next iteration – or an incident will teach you about it the hard way.

How many risk assessment methodologies for information security can we find on the Internet today? If you filter out government- and consultancy-related methodologies (mostly without a tangible product you could buy), you end up with a list of 15–20 different methodologies, some of which also include a risk assessment tool. This might seem like a lot, but only until you try to implement one of them in your organization.

You start with the first one and realize it needs weeks of training because its learning curve is so steep. The next uses awkward terminology, another has strange threat categories that mix apples with oranges or lean on plainly outdated references (talking about Windows 95, fax machines and dial-up connections), and yet another requires a series of complicated installation and configuration steps (plus a highly privileged account on your desktop). And a further group of tools, instead of risk assessment, talks about scanning and vulnerability management.

In fact, if you are a security manager in a small to medium-sized company where every security topic passes through your hands, you might be better off using your own tool, because building one is not rocket science and does not take that much time. You develop its features as you go, and by doing so you capture all the internal knowledge that might otherwise be lost.

Over the years of being a security practitioner in a commercial organization, I have learned a few things.

  • The quest for a perfect ready-made tool is useless, because it is you and your team who add the most value in the process (the tool will not have the “knowledge” on its own).
  • In case you think you did find a perfect product built by someone else, think again: are you sure it is you – not the tool – who discovers risks?
  • Simple things work the best: the more complicated the tool, the less focused you stay on the subject-matter (i.e. the risks); not everything needs to be assessed thoroughly at the same time and at the same level of detail; not everything needs to be automated and re-calculated in real-time.
  • The tool should be pre-defined and structured only to some extent, not limiting your creativity and knowledge of the context (translated as your “internal expertise”).
  • Tools come and go, and you may get stuck with a quite expensive product nobody supports any more, while your own knowledge recorded in there may not be easily accessible.
  • When you hit the proper balance between a pre-defined, mandatory checklist and creativity, you’ll be focusing only on the things that matter most – those that, in a given context, make or break the security of your product or service.

The presentation is not about the latest advances in information security risk assessment. It describes the process of building a practical, wiki-based tool, starting with publicly available resources – threats, vulnerabilities and controls (mostly from ISO/IEC 2700X) – that are merged into a lightweight ecosystem configured, improved and used solely via a web browser. This ecosystem, actually an Information Security Management System called RISSCON, is used daily in a real company (Orange Slovensko a.s.) to keep up with the requirements of ISO/IEC 27001.


DeepSec 2014 Talk: MLD Considered Harmful – Breaking Another IPv6 Subprotocol

In case you haven’t noticed, the Internet is getting crowded. Next to having billions of people online, their devices are starting to follow. Information security experts can’t wait to see this happen. The future relies on the Internet Protocol Version 6 (IPv6). IPv6 features a lot of improvements over IPv4. Since you cannot get complex stuff right the first time, IPv6 brings some security implications with it. Past and present conferences have talked about this. DeepSec 2014 is no exception. Enno Rey of ERNW will talk about Multicast Listener Discovery (MLD) in his presentation.

The presentation marks the first time that the results of ongoing research on MLD are published. MLD is a protocol belonging to the IPv6 family, and sadly it features insecurities. MLD (Multicast Listener Discovery) and its successor, MLDv2, are used by IPv6 routers to discover multicast listeners on a directly attached link, much like the Internet Group Management Protocol (IGMP) is used in IPv4. Even if you haven’t realised it yet, MLD is already everywhere. Many multicast applications, when the underlying layer-3 protocol is IPv6, base their operation on MLD, while most modern operating systems (OS), like Windows, Linux and FreeBSD, not only come pre-configured with IPv6 enabled, but also start up by sending MLD traffic, which is repeated periodically. Despite this out-of-the-box usage, MLD is one of the IPv6 protocols that have not yet been studied to a suitable extent, especially as far as its potential security implications are concerned. As will be shown, these can vary from OS fingerprinting on the local link by passively sniffing the wire, to amplified Denial of Service attacks.
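To give an idea of how simple MLD is on the wire, the following sketch packs the fixed 24-byte MLDv1 message layout defined in RFC 2710. This is an illustrative reconstruction, not a tool from the talk; the checksum is left at zero because in a real packet it is computed over an IPv6 pseudo-header when the message is sent:

```python
import socket
import struct

# MLDv1 message types from RFC 2710 (carried inside ICMPv6).
MLD_QUERY = 130   # Multicast Listener Query
MLD_REPORT = 131  # Multicast Listener Report
MLD_DONE = 132    # Multicast Listener Done

def build_mld(msg_type, max_resp_delay_ms, group):
    """Pack an MLDv1 message: type, code, checksum, max response
    delay, reserved, and the 128-bit multicast group address."""
    return struct.pack(
        "!BBHHH16s",
        msg_type,
        0,                      # code is always 0 for MLD
        0,                      # checksum placeholder (pseudo-header based)
        max_resp_delay_ms,      # only meaningful in queries
        0,                      # reserved
        socket.inet_pton(socket.AF_INET6, group),
    )

# A report for a solicited-node-style group, the kind of traffic an OS
# may emit right after boot (group address chosen for illustration).
pkt = build_mld(MLD_REPORT, 0, "ff02::1:ff00:1")
assert len(pkt) == 24
```

Because hosts emit such reports unsolicited and periodically, a passive listener on the link sees them without sending a single packet, which is what makes the fingerprinting scenario described above possible.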

Specifically, in this talk, after presenting the results of an analysis of the default behaviour of some of the most popular OS, we will examine, using specialised tools, whether the specific OS implementations conform to the security measures defined by the corresponding RFCs, and if not, what the potential security implications are. Then, diving into the specifications of the protocol, we will discuss potential security issues related to the design of MLD and how they can be exploited by attackers. Finally, specific mitigation techniques will be proposed to defend against them, allowing us to secure IPv6 networks to the best possible extent in the emerging IPv6 era.

Let’s make the IPv6 world a safer place! ☻

For anyone dealing with IPv6, DeepSec 2014 also offers a two-day training, IPv6 Attacks and Defenses – A Hands-on Workshop, held by Enno himself.

Internet service providers have already rolled out IPv6, especially for hosted or co-located environments. Furthermore, IPv6 connectivity is widely available by means of tunnels (Teredo, for example). This is why we recommend Enno’s talk and training to anyone using networks (an Internet connection is optional, since most operating systems use IPv6 locally anyway). Deal with your network before attackers do!

DeepSec 2014 Keynote: The Measured CSO

It’s good if your organisation has someone to take on information security. However, it’s bad if you are the person in this position. Few are lucky enough to actually work on improving information security. Some are caught in compliance, fighting an uphill struggle against regulations and audits that have nothing to do with the threats to their business.

The management of Information Security has become over-regulated and to some degree, over-focused on compliance to policy/regulation, architectural decisions, network access, and vulnerability management. As a result, many CISOs struggle to define success in terms that match the goals of their business, and struggle to make their risk management efforts relevant to senior executives.

How do you achieve that? Alex Hutton will tell you in his keynote talk at DeepSec 2014. His goal is for attendees to walk away from the talk with two things. First, a technique they can take back to their jobs that will help them make the concept of “aligning security with the business” less of a platitude and more of a reality. Second, a new, threat-centric framework that uses metrics to understand how security operations reduce risk.

Curious? Cursed with being a CSO? Dreaming of becoming one someday? Having nightmares about metrics? Well, then you should probably avoid missing Alex’s keynote!


EuroTrashSecurity Podcast – Microtrash37 : DeepSec 2014 Content

Microtrash37 of the EuroTrashSecurity podcast is out! We had a little talk with Chris about the schedule of DeepSec 2014 and what to expect. It’s a teaser for the blog articles about the talks and the trainings to come. We will describe more details on the blog, but the audio gives you a good overview of what to expect.

We also got some inside information on the upcoming BSidesVienna 0x7DE. We will definitely attend and so should you! The BSidesVienna has some cool surprises for you. Don’t miss out on the chance to get together. The Call for Papers is still open! If you have something to share, please consider submitting a talk.