DeepSec 2024 Talk: Should You Let ChatGPT Control Your Browser? – Donato Capitella

Sanna/ September 19, 2024/ Conference/ 0 comments

This presentation will explore the practical risks associated with granting Large Language Models (LLMs) agency, enabling them to perform actions on behalf of users. We will delve into how attackers can exploit these capabilities in real-world scenarios. Specifically, the focus will be on an emerging use cases: autonomous browser and software engineering agents. The session will cover how LLM agents operate, the risks of indirect prompt injection, and strategies for mitigating these vulnerabilities.

We asked Donato a few more questions about his talk.

Please tell us the top 5 facts about your talk.

  1. LLM Red Teaming tools are benchmarks useful for LLM builders, but they are less useful to developers or application security testers
  2. When talking about “LLM Application Security”, we need to focus on the use-case the LLM application is enabling
  3. The talk shows practical examples of indirect prompt injection against LLM applications, such as autonomous agents
  4. Autonomous LLM agents are but a promise: in practice, there are many cyber security challenges we haven’t solved
  5. The talk offers a battle-tested set of controls to ship LLM applications to production as securely as possible

How did you come up with it? Was there something like an initial spark that set your mind on creating this talk?

The first time I used ChatGPT I was blown away, and I started looking into the underlying technology and studying Deep Learning. My focus wasn’t security at that point. I just wanted to know how it worked. Then, clients started implementing applications that used LLMs, so I started looking into how these could be exploited… and here I am!

Why do you think this is an important topic?

Everybody is sello-taping LLMs to every single application and product they make, with varying levels of success (however, I argue some use-cases are great). However, there are inherent issues with the technology, such as alignment and prompt injection. The more agency and power we give to LLMs, the higher the impact of these risks.

Is there something you want everybody to know – some good advice for our readers, maybe?

Shipping an LLM application into production requires more than just sending a prompt to an LLM. There’s an entire security pipeline to build and maintain around the LLM. Most people miss this and expose their customers and organisations to risks.

A prediction for the future – what do you think will be the next innovations or future downfalls when it comes to your field of expertise / the topic of your talk in particular?

Aligning current LLMs is much harder than it intuitively looks like. Possibly intractable. We’ll likely need some other advancement to achieve higher levels of intelligence – and thus eliminate jailbreak/prompt injection vulnerabilities, which are just a facet of reasoning issues. LLMs might be part of the solution or not, but they are unlikely to be the end game.

 

Donatos work includes leading penetration testing for web applications and networks, and conducting adversary simulation and purple team activities. Recently, his research has focused on the security of autonomous agents created using Large Language Models (GenAI). Additionally, Donato has developed and delivered various training courses, including WithSecure’s Secure Software Engineering, to enhance industry knowledge and promote continuous learning.

Share this Post

Leave a Comment

Your email address will not be published. Required fields are marked *

*
*

This site uses Akismet to reduce spam. Learn how your comment data is processed.