DeepSec 2024 Press Release: The limits of ‘AI’ language models lie in security. DeepSec warns: ‘AI’ language models generate content and override authorisations

Sanna/ June 4, 2024/ Conference, Press/ 0 comments

    Language model algorithms, also known as generative artificial intelligence, continue to celebrate their supposed triumphant advance through many media platforms. Security researchers have analysed the products and revealed a number of weaknesses in the ‘AI’ applications. This year’s DeepSec conference is dedicated to the threats posed by ‘AI’ learning models that use incomplete restrictions to analyse public and sensitive data. Large Language Models (LLMs) as Auto-Completion The technical description of the many ‘artificial intelligence’ (‘AI’) products on the market is impressive. In simple terms, the concept behind the advertising campaigns consists of algorithms that copy as much data as possible, break it down and then recombine it to provide answers to any questions. The learning process when creating the language model is not initially monitored or moderated. Only in later phases does

Read More

Science Fictions meets Large Language Models

René Pfeiffer/ May 25, 2024/ Conference/ 0 comments

Given the advertising of the manufacturers using a Large Language Model (LLM) is just like having a conversation with a person. The reality looks different. Google’s AI search has recently recommended to glue pizza together, eats rocks, or jump off the Golden Gate bridge when being depressed. This is clearly bad advice. Apparently these answers are or were part of the learning process. Incidents like this and the hallucinations of LLM algorithms have been discussed already. Science fictions fans will recall conversations with computer or AI simulations where someone tries to trick the machine do override security checks. The Open Worldwide Application Security Project (OWASP) created a list of threats to LLM applications. The target audience are developers, designers, architects, managers, and organizations. Marc Pesce wrote an article about tests with different LLM implementations.

Read More

DeepSec 2023 Talk: Automating Incident Response: Exploring the Latest Conversational AI Tools – Hagai Shapira

Sanna/ September 6, 2023/ Conference

As security incidents become increasingly complex, it’s crucial for SOC and incident response teams to focus on actual malicious investigations. However, their ability to do so is often limited by time-consuming human interactions with stakeholders. In this talk, we’ll explore different levels of automation approaches for incident response, culminating in the latest additions of conversational AI tools. These tools enable full investigations with human stakeholders to be performed automatically, with an analyst only as a silent observer/supervisor. We’ll discuss the benefits and limitations of using conversational AI tools in incident response, as well as real-world examples of how these tools have been used effectively. By the end of the talk, attendees will have a better understanding of how to leverage this technology to streamline their incident response processes and improve their overall security posture.

Read More