DeepSec 2024 Talk: Detecting Phishing using Visual Similarity – Josh Pyorre

Sanna/ October 10, 2024/ Conference/ 0 comments

Current phishing detection methods include analyzing URL reputation and patterns, hosting infrastructure, and file signatures. However, these approaches may not always detect phishing pages that mimic the look and feel of previously observed attacks. This talk explores an approach to detecting similar phishing pages by creating a corpus of visual fingerprints from known malicious sites. By taking screenshots, calculating hash values, and storing metadata, a reference library can compare against newly crawled suspicious URLs. By combining fuzzy searches and OCR techniques with other methods, we can identify similar matches. We asked Josh a few more questions about his talk. Please tell us the top 5 facts about your talk. In security, URL block lists are widely used, but I rarely see people utilizing a database of visual information to hunt for phishing attacks that

Read More

DeepSec 2024 Talk: From Dungeon Crawling to Cyber Defense Drill: Using RPG Principles and LLM for Operational Team Dev – Aurélien Denis & Charles Garang

Sanna/ September 23, 2024/ Conference/ 0 comments

Continuous improvement/training is in the DNA of cybersecurity professionals, specifically for incident responders, which are always searching for new ways to learn and practice their technical and analytical crafts. This is even more the case in mature environments where Incident response teams may find themselves in a situation with few high stakes incidents, preventing them from applying their technical and thinking skills, thus lowering their readiness when a crisis occur. LLMs based conversational agents are becoming mainstream, and applications are countless. In the meantime, Tabletop Role-Playing Games (TTRPG) are found to be a great breeding ground for creativity and fun. To achieve the benefits of this game, preparation is needed and a game master must be present to keep the players engaged. So we leveraged the power of AI, mixed automation and past experiences

Read More

DeepSec 2024 Talk: Should You Let ChatGPT Control Your Browser? – Donato Capitella

Sanna/ September 19, 2024/ Conference/ 0 comments

This presentation will explore the practical risks associated with granting Large Language Models (LLMs) agency, enabling them to perform actions on behalf of users. We will delve into how attackers can exploit these capabilities in real-world scenarios. Specifically, the focus will be on an emerging use cases: autonomous browser and software engineering agents. The session will cover how LLM agents operate, the risks of indirect prompt injection, and strategies for mitigating these vulnerabilities. We asked Donato a few more questions about his talk. Please tell us the top 5 facts about your talk. LLM Red Teaming tools are benchmarks useful for LLM builders, but they are less useful to developers or application security testers When talking about “LLM Application Security”, we need to focus on the use-case the LLM application is enabling The talk

Read More

Science Fictions meets Large Language Models

René Pfeiffer/ May 25, 2024/ Conference/ 0 comments

Given the advertising of the manufacturers using a Large Language Model (LLM) is just like having a conversation with a person. The reality looks different. Google’s AI search has recently recommended to glue pizza together, eats rocks, or jump off the Golden Gate bridge when being depressed. This is clearly bad advice. Apparently these answers are or were part of the learning process. Incidents like this and the hallucinations of LLM algorithms have been discussed already. Science fictions fans will recall conversations with computer or AI simulations where someone tries to trick the machine do override security checks. The Open Worldwide Application Security Project (OWASP) created a list of threats to LLM applications. The target audience are developers, designers, architects, managers, and organizations. Marc Pesce wrote an article about tests with different LLM implementations.

Read More

DeepSec Scuttlebutt: Fun with Fuzzing, LLMs, and Backdoors

René Pfeiffer/ July 31, 2023/ Call for Papers, Scuttlebutt

[This is the blog version of our monthly DeepSec Scuttlebutt musings. You can subscribe to the DeepSec Scuttlebug mailing list, if you want to read the content directly in your email client.] Dear readers, the Summer temperatures are rising. The year 2023 features the highest measured temperatures in measurement history. This is no surprise. The models predicting what we see and feel now have been created in the 1970s by Exxon. So far, the model has been quite accurate. What has this to do with information security? Well, infosec also uses models for attack and defence, too. The principles of information security has stayed the same, despite the various trends. These are the building blocks of our security models. They can be adapted, but the overall principles have little changed from two-hosts-networks to the

Read More