DeepSec Talk 2024: GenAI and Cybercrime: Separating Fact from Fiction – Candid Wüest

Sanna/ September 11, 2024/ Conference/ 0 comments

Are we standing at the brink of an AI Armageddon? With the rise of Generative AI (GenAI), cybercriminals allegedly now wield unprecedented AI tools, flooding the digital world with sophisticated, unblockable threats. This talk aims to dissect the hype and uncover the reality behind the use of GenAI in cybercrime. We will explore the growing use of deepfakes in scams, exemplified by a million-dollar fake BEC video conference call. From son-in-trouble scams to KYC bypass schemes, deepfakes are becoming versatile tools for cybercriminals and a nightmare for defenders. Turning to phishing attacks, we’ll discuss how GenAI personalizes and automates social engineering, significantly increasing the volume of attacks. However, attackers still need an account to send from and a payload to deliver; even the perfect phishing text can still be blocked. We’ll also

Read More

DeepSec 2024 Press Release: The limits of ‘AI’ language models lie in security. DeepSec warns: ‘AI’ language models generate content and override authorisations

Sanna/ June 4, 2024/ Conference, Press/ 0 comments

Language model algorithms, also known as generative artificial intelligence, continue to celebrate their supposed triumphant advance through many media platforms. Security researchers have analysed the products and revealed a number of weaknesses in the ‘AI’ applications. This year’s DeepSec conference is dedicated to the threats posed by ‘AI’ learning models that use incomplete restrictions to analyse public and sensitive data.

Large Language Models (LLMs) as Auto-Completion

The technical description of the many ‘artificial intelligence’ (‘AI’) products on the market is impressive. In simple terms, the concept behind the advertising campaigns consists of algorithms that copy as much data as possible, break it down and then recombine it to provide answers to any questions. The learning process when creating the language model is not initially monitored or moderated. Only in later phases does

Read More