DeepSec and DeepINTEL 2024 Call for Papers is open
The call for papers is open! DeepSec and DeepINTEL are waiting for your input. We are looking for your talks and trainings. Tell us what you found, and tell our trainees how to defend against attacks. Please submit proposals for trainings as early as possible; we try to fill at least half of the training slots before the summer, so interested participants have more time to plan their attendance. Our main aim for 2024 is to examine the weaknesses of Large Language Models (LLMs) and explore their potential for exploitation. The obvious attack vector is the prompt, but there are also ways to influence or poison the training data. We have seen publications and nascent source code doing this. The less obvious way of weaponising these algorithms is to spread disinformation. Generated content