DeepSec 2024 Talk: A Practical Approach to Generative AI Security – Florian Grunow & Hannes Mohr

Sanna/ September 12, 2024/ Conference/ 0 comments

The rise of applications based on AI (mostly generative AI) forces us to think about the security and privacy implications of these systems. We will try to make sense of the attack surface of generative AI applications, what practitioners in the field need to consider in development and operations, and how they can derive security measures for these systems. We will first dive into the range of generative AI applications, using examples from the OpenAI ecosystem. This will give the audience an understanding of the fundamental problem of AI from a security perspective. We then offer an insight into the attack surface of those applications. This will help understand what needs to be secured and what can be secured. Many times, good old security best practices will be a good start, although AI

Read More

DeepSec Talk 2024: GenAI and Cybercrime: Separating Fact from Fiction – Candid Wüest

Sanna/ September 11, 2024/ Conference/ 0 comments

Are we standing at the brink of an AI Armageddon? With the rise of Generative AI (GenAI), cybercriminals allegedly now use unprecedented AI tools, flooding the digital world with sophisticated, unblockable threats. This talk aims to dissect the hype and uncover the reality behind the use of GenAI in cybercrime. We will explore the growing use of deepfakes in scams, exemplified by a million-dollar fake BEC video conference call. From son-in-trouble scams to KYC bypass schemes, deepfakes are becoming versatile tools for cybercriminals and a nightmare for defenders. Turning to phishing attacks, we’ll discuss how GenAI personalizes and automates social engineering, significantly increasing the volume of attacks. However, attackers still require an account to send from and some payload. Having the ultimate phishing text does not mean you will not be blocked. We’ll also

Read More

DeepSec Talk 2024: Firmware Forensics: Analyzing Malware Embedded in Device Firmware – Diyar Saadi Ali

Sanna/ September 10, 2024/ Conference/ 0 comments

Firmware, essential to hardware functionality, is increasingly becoming a prime target for cyber threat actors because of its foundational control over devices. This presentation delves into a detailed analysis of malware embedded within purported firmware updates for Sabrent devices, a case study revealing widespread exploitation. By leveraging advanced static and dynamic analysis techniques, we uncover the intricate workings of this malware, strategically hidden within seemingly legitimate firmware patches. Through meticulous investigation, including static examination of file headers, hashes, and embedded resources, and dynamic analysis within controlled environments, we decipher the malware’s operational stages. This includes its initial execution triggers, subsequent macro-driven deployments, and ultimate persistence mechanisms through registry modifications, all orchestrated to evade detection and ensure prolonged access to compromised systems. We asked Diyar a few more questions about his talk. Please tell us the

Read More

DeepSec 2024 Training: The Mobile Playbook: Dissecting iOS and Android Apps – Sven Schleier

Sanna/ September 9, 2024/ Conference, Training/ 0 comments

This course teaches you how to analyse Android and iOS apps for security vulnerabilities by going through the different phases of testing, including dynamic testing, static analysis, and reverse engineering. Sven will share his experience and many small tips and tricks to attack mobile apps that he has collected throughout his career and bug hunting adventures. We asked Sven a few more questions about his training. Please tell us the top 5 facts about your training. Focus: The course teaches penetration testing of Android and iOS apps using the OWASP Mobile Application Security Testing Guide (MASTG). The OWASP MASTG is an open-source documentation project that summarises techniques for penetration testing and reverse engineering of mobile apps. Hands-on Experience: We will go through many labs and real-world scenarios with customized apps. Many of the labs can

Read More

DeepSec 2024 Training: “Look What You Made Me Do”: The Psychology behind Social Engineering & Human Intelligence Operations – Christina Lekati

Sanna/ August 26, 2024/ Conference/ 0 comments

Social Engineering and Human Intelligence (HUMINT) operations both rely heavily on effectively navigating a person’s mind in order to steer their behavior. As simple as this sounds, “quick and dirty” influence tactics will not take an operator very far. Behavior engineering is a complex, multilayered process that requires a good understanding of human psychology and self-awareness. In this intensive masterclass, participants will get access to the underlying psychology responsible for the way people think, decide, and act. They will also learn to influence and reshape all three layers. What are people’s automatic triggers? How can you engineer predictable action-reaction responses that produce a desirable outcome? How do you cultivate a target into taking specific actions or divulging information? But also, what are the ethical boundaries and moral implications of this process? The class will

Read More

DeepSec 2024 Training: Hacking Modern Web & Desktop Apps: Master the Future of Attack Vectors – Abraham Aranguren

Sanna/ August 23, 2024/ Conference, Training/ 0 comments

This course is the culmination of years of experience gained via practical penetration testing of Modern Web and Desktop applications and countless hours spent doing research. We have structured this course around the OWASP Security Testing Guide. It covers the OWASP Top Ten and specific attack vectors against Modern Web and Desktop apps. Participants in this course can immediately apply actionable skills from day 1. Please note our courses are 100% hands-on. We do not lecture students with boring bullet points and theories; instead, we give you practical challenges and help you solve them, teaching you how to troubleshoot common issues and get the most out of this training. The training then continues after the course through our frequently updated training portal, to which you keep lifetime access, as well as unlimited email support.

Read More

DeepSec 2024 Training: AI SecureOps: Attacking & Defending GenAI Applications and Services – Abhinav Singh

Sanna/ August 22, 2024/ Conference, Training/ 0 comments

Acquire hands-on experience in GenAI and LLM security through CTF-styled training, tailored to real-world attack and defense scenarios. Dive into protecting both public and private GenAI & LLM solutions, crafting specialized models for distinct security challenges. Excel in red and blue team strategies, create robust LLM defenses, and enforce ethical AI standards across enterprise services. This training covers both “Securing GenAI” and “Using GenAI for security” for a well-rounded understanding of the complexities involved in AI-driven security landscapes. We asked Abhinav a few more questions about his training. Please tell us the top facts about your talk. It covers both aspects of AI security: 1. Using AI for security; 2. Security of AI. How did you come up with it? Was there something like an initial spark that set your mind on creating this

Read More

DeepSec 2024 Training: Attacking and Defending Private 5G Cores – Altaf Shaik

Sanna/ August 21, 2024/ Conference, Training/ 0 comments

Security is paramount in private 5G networks because of their tailored nature for enterprises. They handle sensitive data, connect mission-critical devices, and are integral to operations. This advanced 5G Core Security Training is a comprehensive program designed to equip security professionals with advanced skills and techniques to identify and mitigate potential security threats in private 5G networks. Participants will gain a deep understanding of 5G core security and protocols, and learn how to develop and use the latest 5G pen testing tools and techniques to perform vulnerability assessments and exploit development. The training will also cover the latest 5G security challenges and best practices, and provide participants with hands-on experience in simulating original attacks and defenses on a local zero-RF-transmitting 5G network. We asked Altaf a few more questions about his training. Please tell

Read More

DeepSec Training 2024: Software Reverse Engineering Training Course for Beginners – Balazs Bucsay

Sanna/ August 20, 2024/ Conference, Training/ 0 comments

The training course targets attendees who have little to no knowledge of reverse engineering but possess the ability to write simple programs in a programming language of their choice and a desire to learn reverse engineering of compiled applications. The course spans two days, during which low-level computing and the basics of architectures are explained. The primary target architectures of this course are Intel x86 and AMD x64, where we cover the fundamentals of computing and assembly language. Throughout the course, we will explore how to create basic programs in both C and assembly, and then explore the process of reverse engineering using a disassembler, decompiler, and debugger on Windows. Each day of the course emphasises hands-on labs, allowing participants to apply their newly gained knowledge in practical exercises. Theory alone quickly fades,

Read More

DeepSec publishes preliminary Schedule

René Pfeiffer/ August 19, 2024/ Conference, DeepIntel/ 0 comments

We are happy and proud to present the preliminary schedule for DeepSec 2024! Once again, the many submissions overwhelmed us. We could have filled the schedules of two or three conferences. The range of topics matches current events, your needs for improving digital defence, and insights into how vulnerable systems and humans are. All presentations were created 100% by organic intelligence. No previous instructions had to be ignored. Also have a look at the trainings. We have selected very useful topics and feature expert trainers to guide you through the content. As usual, the schedule for DeepINTEL is only available on request. We hope to see you all in Vienna!

DeepSec Call for Papers has officially ended – Review Phase opened

René Pfeiffer/ August 1, 2024/ Call for Papers, Conference/ 0 comments

The call for papers process for DeepSec has officially ended. We tried to keep track of your submissions, but now we will dive deep into the review phase. You may have noticed that the trainings have already been published online. Usually, we publish the training slots earlier. We try to do this before the summer, but this year the training review was delayed because all reviewers were very busy. Now we have even more work because of the number of proposals for talks. Thank you all for your contributions! Creating the schedule will be hard, so bear with us and allow one to two weeks for the reviews. We promise that all of you will receive either a confirmation when accepted or a message if your submission was declined. Don’t be discouraged or

Read More

Reminder: Call for Papers DeepSec/DeepINTEL is still open!

René Pfeiffer/ July 12, 2024/ Conference/ 0 comments

It’s that time of the year again when hot weather and deadlines collide. The call for papers for both DeepSec and DeepINTEL is still open! We are looking for original content, your creative ideas, and your invaluable experience. Please submit your proposal to our CfP manager. As always, we have a variety of topics we are interested in. The wonderful world of „artificial intelligence“ has taken the world and its CO2 output by storm. Large Language Models (LLMs) have „learned“ the Internet multiple times. Companies offering their LLM-based services promise to solve all kinds of tasks. What does this mean for IT security? Disinformation and propaganda are a big topic. Europe has already seen elections where structural disinformation played (and plays!) a vital role. Using false information in order to influence the voting

Read More

IT Security, Standards, and Compliance

René Pfeiffer/ July 12, 2024/ Call for Papers, Conference, Legal/ 0 comments

You can often see the classic divide between technical and compliance people in information technology within teams or organisations. Writing guidelines and writing configurations for implementation seem very different, with no overlap. In reality, everyone has procedures. While they might not be written down or follow a standardized format, having your own ways of doing things is crucial to succeed in IT. The same goes for security. Creating policy documents and describing procedures in a way that technical minds can actually use them is a challenge. There is a crossover with the profession of writers who are experts in conveying nonfiction stories. And this is the origin of the schism between technicians and the compliance world. Badly written policies are a security risk, because no one takes them seriously. The purpose of your procedure documentation is

Read More

DeepSec 2024 Press Release: The limits of ‘AI’ language models lie in security. DeepSec warns: ‘AI’ language models generate content and override authorisations

Sanna/ June 4, 2024/ Conference, Press

Language model algorithms, also known as generative artificial intelligence, continue to celebrate their supposed triumphant advance through many media platforms. Security researchers have analysed the products and revealed a number of weaknesses in the ‘AI’ applications. This year’s DeepSec conference is dedicated to the threats posed by ‘AI’ learning models that use incomplete restrictions to analyse public and sensitive data.

Large Language Models (LLMs) as Auto-Completion

The technical description of the many ‘artificial intelligence’ (‘AI’) products on the market is impressive. In simple terms, the concept behind the advertising campaigns consists of algorithms that copy as much data as possible, break it down and then recombine it to provide answers to any questions. The learning process when creating the language model is not initially monitored or moderated. Only in later phases does

Read More

Science Fiction meets Large Language Models

René Pfeiffer/ May 25, 2024/ Conference

According to the manufacturers’ advertising, using a Large Language Model (LLM) is just like having a conversation with a person. The reality looks different. Google’s AI search has recently recommended putting glue on pizza, eating rocks, or jumping off the Golden Gate Bridge when depressed. This is clearly bad advice. Apparently these answers are or were part of the learning process. Incidents like this and the hallucinations of LLM algorithms have already been discussed. Science fiction fans will recall conversations with computers or AI simulations where someone tries to trick the machine into overriding security checks. The Open Worldwide Application Security Project (OWASP) has created a list of threats to LLM applications. The target audience includes developers, designers, architects, managers, and organizations. Marc Pesce wrote an article about tests with different LLM implementations.

Read More