Science Fictions meets Large Language Models

René Pfeiffer/ May 25, 2024/ Conference/ 0 comments

Given the advertising of the manufacturers using a Large Language Model (LLM) is just like having a conversation with a person. The reality looks different. Google’s AI search has recently recommended to glue pizza together, eats rocks, or jump off the Golden Gate bridge when being depressed. This is clearly bad advice. Apparently these answers are or were part of the learning process. Incidents like this and the hallucinations of LLM algorithms have been discussed already. Science fictions fans will recall conversations with computer or AI simulations where someone tries to trick the machine do override security checks. The Open Worldwide Application Security Project (OWASP) created a list of threats to LLM applications. The target audience are developers, designers, architects, managers, and organizations. Marc Pesce wrote an article about tests with different LLM implementations.

Read More

DeepSec Scuttlebutt: Fun with Fuzzing, LLMs, and Backdoors

René Pfeiffer/ July 31, 2023/ Call for Papers, Scuttlebutt

[This is the blog version of our monthly DeepSec Scuttlebutt musings. You can subscribe to the DeepSec Scuttlebug mailing list, if you want to read the content directly in your email client.] Dear readers, the Summer temperatures are rising. The year 2023 features the highest measured temperatures in measurement history. This is no surprise. The models predicting what we see and feel now have been created in the 1970s by Exxon. So far, the model has been quite accurate. What has this to do with information security? Well, infosec also uses models for attack and defence, too. The principles of information security has stayed the same, despite the various trends. These are the building blocks of our security models. They can be adapted, but the overall principles have little changed from two-hosts-networks to the

Read More