DeepSec Talk 2024: GenAI and Cybercrime: Separating Fact from Fiction – Candid Wüest

Sanna / September 11, 2024 / Conference

Are we standing at the brink of an AI Armageddon? With the rise of Generative AI (GenAI), cybercriminals are allegedly wielding unprecedented AI tools and flooding the digital world with sophisticated, unblockable threats. This talk dissects the hype and uncovers the reality behind the use of GenAI in cybercrime.

We will explore the growing use of deepfakes in scams, exemplified by a million-dollar fake BEC video conference call. From son-in-trouble scams to KYC bypass schemes, deepfakes are becoming versatile tools for cybercriminals and a nightmare for defenders. Turning to phishing attacks, we’ll discuss how GenAI personalizes and automates social engineering, significantly increasing the volume of attacks. However, such attacks still require an account to send from and a payload to deliver; having the ultimate phishing text does not mean it won’t get blocked. We’ll also showcase how GenAI can generate basic malware, similar to malware toolkits, and explore advanced threats like polymorphic/metamorphic malware that dynamically adapts in real time. By clarifying the differences between AI-generated, AI-aided, and AI-powered threats, we reveal that while GenAI facilitates threat distribution, true AI-powered malware is still rare. While GenAI scales and speeds up attacks, it does not fundamentally create new threat patterns, so behavior-based and anomaly detection remain effective against them. We will also discuss additional threat concepts such as Morris II, a self-replicating indirect prompt injection worm that exploits filtering weaknesses in Retrieval-Augmented Generation (RAG) systems and AI apps. In conclusion, we will weigh current AI-powered threats against defenders’ capabilities and highlight future research areas, such as finding and exploiting zero-day vulnerabilities and supply chain attacks against GenAI that target pickle files and Python.
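
To make the pickle-based supply chain risk mentioned above concrete, here is a minimal illustrative sketch (our own, not material from the talk) of why loading an untrusted pickled object, for example model weights downloaded from a public hub, can hand an attacker code execution: Python’s pickle protocol lets an object specify a callable to invoke during deserialization.

```python
import os
import pickle

# A pickled object may define __reduce__ to return a callable plus its
# arguments; pickle.loads() invokes that callable while deserializing.
# A harmless echo stands in for real attacker code here.
class TaintedWeights:
    def __reduce__(self):
        return (os.system, ("echo 'code ran while loading the model file'",))

# The attacker ships this blob inside a "model" file...
blob = pickle.dumps(TaintedWeights())

# ...and the victim triggers execution simply by loading it.
pickle.loads(blob)
```

This is one reason data-only formats such as safetensors are increasingly recommended for distributing model weights.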

Join us to learn the difference between the hype and the real impact of AI in cybercrime, and understand what this means for the future of cybersecurity.

We asked Candid a few more questions about his talk.

Please tell us the top 5 facts about your talk.

  1. Generative AI is increasingly being used by cybercriminals in their attacks, but it is not (yet) as devastating as the press suggests.
  2. Convincing deepfakes are being used in high-profile scams and are becoming versatile tools for cybercriminals.
  3. Generative AI is enhancing phishing attacks by personalizing and automating social engineering, though such attacks still face traditional limitations.
  4. The primary benefits of generative AI for attackers are increased attack frequency, automation, and scalability.
  5. While generative AI can create basic malware and facilitate polymorphic threats, truly AI-powered malware remains rare, with behavior-based detection still proving effective.

How did you come up with it? Was there something like an initial spark that set your mind on creating this talk?

The idea for this talk naturally evolved from my longstanding fascination with AI and machine learning (AI/ML) models, a passion that began well before the appearance of tools like ChatGPT. In my work, I’ve been involved in developing various AI/ML-based malware classification and detection tools, as well as studying bypass techniques that cybercriminals might use. So, this field has always greatly interested me. This year, I found myself increasingly approached by customers and journalists eager to hear my thoughts on the implications of generative AI in the realm of cybercrime. What really pushed me to create this talk was my growing frustration with the sensationalist, doomsday headlines that seemed to dominate the news: stories that painted a picture of an impending AI-driven cyber apocalypse. Despite all the hype, I had observed no significant shifts in cyberattack patterns that would justify such alarm. This motivated me to dive deeper into the subject and explore where the true reality lies, separating fact from fiction in the discussion around AI in cybersecurity.

Why do you think this is an important topic?

It is important to know the current state of affairs in order to focus defenses where they matter most. I believe this is an important topic because the rise of generative AI has sparked both excitement and fear. As someone deeply involved in AI/ML-based threat detection, I’ve seen firsthand how powerful these technologies can be. However, I’ve also noticed a growing trend of exaggerated threats portrayed in the media. This gap in understanding can lead to unnecessary panic, misinformed decisions, and potentially even a misallocation of resources in cybersecurity. By addressing this topic, I aim to provide a balanced perspective, highlighting both the actual risks and the limitations of generative AI in cybercrime. It’s crucial that we approach this evolving landscape with a clear, informed mindset so that we can effectively protect our digital world without falling prey to sensationalism.

Is there something you want everybody to know – some good advice for our readers, maybe?

Don’t fear generative AI—get comfortable with it; it’s here to stay. Use it as a tool to speed up and improve your work, but don’t think that it will generate a whole new world of cyberattacks that are undetectable.

A prediction for the future – what do you think will be the next innovations or future downfalls when it comes to your field of expertise / the topic of your talk in particular?

In the future, I believe we’ll see significant innovations in both the offensive and defensive applications of AI in cybersecurity. Generative AI will undoubtedly evolve further and be used by attackers to facilitate their attacks. Deepfakes will create a shift in how we authenticate and trust information, while AI models themselves will be attacked from every possible angle, from indirect prompt injection abusing RAG systems to manipulated components in supply chain attacks. I expect the development of more advanced AI-powered tools that can autonomously adapt and evolve. Newer AI models with a structured approach to reasoning and goal achievement will complete more complex tasks. Initiatives such as the AIxCC from DARPA will boost research even further. In the end, we will move into a period of AI vs. AI, where AI will be needed to defend quickly enough against the tsunami of unknown attacks.

 

Candid Wuest is an experienced cybersecurity expert with a strong blend of technical skills and over 25 years of passion in security. He currently works as an independent security advisor for various companies and the Swiss government. Previously, he was the VP of Cyber Protection Research at Acronis, where he led the creation of the security department and the development of their EDR product. Before that, he spent over sixteen years building Symantec’s global security response team as the tech lead, analyzing malware and threats – from NetSky to Stuxnet. Wuest has published a book and various whitepapers and has been featured as a security expert in top-tier media outlets. He is a frequent speaker at security-related conferences, including RSAC and BlackHat, and organizer of AREA41 and BSidesZurich. He learned to code and the English language on a Commodore 64. With a Master of Computer Science from ETH Zurich, he possesses many patents and worthless certifications.
