Science Fictions meets Large Language Models

René Pfeiffer/ May 25, 2024/ Conference/ 0 comments

Generated picture, prompt was: james bond being chased by a t-800 terminator robot from futurama, thriller, vibrant color gradingGiven the advertising of the manufacturers using a Large Language Model (LLM) is just like having a conversation with a person. The reality looks different. Google’s AI search has recently recommended to glue pizza together, eats rocks, or jump off the Golden Gate bridge when being depressed. This is clearly bad advice. Apparently these answers are or were part of the learning process. Incidents like this and the hallucinations of LLM algorithms have been discussed already. Science fictions fans will recall conversations with computer or AI simulations where someone tries to trick the machine do override security checks. The Open Worldwide Application Security Project (OWASP) created a list of threats to LLM applications. The target audience are developers, designers, architects, managers, and organizations.

Marc Pesce wrote an article about tests with different LLM implementations. He found ways to force the application to spit out garbage. The tests were performed against a selection of chat bots. Every single time there was a prompt that could break the conversation and make the chat bot reply with nonsense. The problem is still being investigated by vendors, but it looks like a structural problem in the algorithms. Debugging effects like this is hard. LLMs are not simple pieces of code. Furthermore, you need to take the learning data into consideration.

Using less data during the learning phase may create a better filter system. Unfortunately, the programming assistants do not fare any better. A study fed 517 questions from a tech forum to ChatGPT. 52% of the answer contained misinformation. That make the tool unsuitable as an assistant. Integrated development environments need to be fast and accurate. Looking up answers that you still need to fact check is already part of the current process. This doesn’t require a new tool.

We are hoping to see your LLM exploits and experiences at DeepSec in November. See you!

Share this Post

About René Pfeiffer

System administrator, lecturer, hacker, security consultant, technical writer and DeepSec organisation team member. Has done some particle physics, too. Prefers encrypted messages for the sake of admiring the mathematical algorithms at work.

Leave a Comment

Your email address will not be published. Required fields are marked *

*
*

This site uses Akismet to reduce spam. Learn how your comment data is processed.