DeepSec 2024 Press Release: The limits of ‘AI’ language models lie in security. DeepSec warns: ‘AI’ language models generate content and override authorisations

Sanna/ June 4, 2024/ Conference, Press

Language model algorithms, also known as generative artificial intelligence, continue their supposed triumphant advance through many media platforms. Security researchers have analysed the products and revealed a number of weaknesses in these ‘AI’ applications. This year’s DeepSec conference is dedicated to the threats posed by ‘AI’ learning models that analyse public and sensitive data with only incomplete restrictions.

Large Language Models (LLMs) as Auto-Completion

The technical description of the many ‘artificial intelligence’ (‘AI’) products on the market is impressive. In simple terms, the concept behind the advertising campaigns consists of algorithms that copy as much data as possible, break it down, and then recombine it to provide answers to arbitrary questions. The learning process used to create the language model is initially neither monitored nor moderated. Only in later phases does moderation take place.
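The description of language models as statistical auto-completion can be made concrete with a toy sketch. The following Python snippet is purely illustrative: real LLMs use neural networks over tokens rather than word-pair counts, and the corpus and function names here are invented for the example. It shows the principle the release describes, namely ingesting text, breaking it into pieces, and recombining them to complete a prompt, with no moderation of what goes in.

```python
import random
from collections import defaultdict

# Illustrative corpus; in a real system this would be vast amounts of
# scraped text, ingested without initial moderation.
corpus = (
    "security researchers analyse language models . "
    "language models generate content from training data . "
    "training data may contain sensitive content ."
).split()

# "Learning": record which word follows which. Nothing filters or
# vets the input at this stage.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def complete(prompt_word: str, length: int = 8) -> str:
    """Auto-complete by repeatedly sampling a plausible next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(complete("language"))
# e.g. "language models generate content from training data ."
```

The absence of any filtering in the "learning" step above mirrors the unmoderated training phase the release refers to: whatever is in the data, sensitive or not, can resurface in the recombined output.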
