Understanding Artificial Intelligence, its Use Cases, and Security Implications

René Pfeiffer / May 15, 2023 / Conference

Photorealistic view of the inside of a modern microprocessor according to the Midjourney algorithm, showing a circuit in steampunk style.

Hypes and trends are great. You can talk a lot about a specific topic without really understanding the underlying technology. Ever since the AI train left the station, everyone has been talking about it and trying to solve all kinds of problems with a single algorithmic approach. Large language models (LLMs) are apparently the best invention since division and multiplication. While there is nothing wrong with exploring how technology can be used, the current discussion about the use of AI algorithms has drifted into shamanism. Companies want to feature one of these new algorithms for good luck, promising business models, and saving all kinds of effort when dealing with data. Let’s take a step back and review the history of artificial intelligence in computer science.

In the 1970s and 1980s, expert systems were all the fashion. The idea was to determine all the questions and branches of a question/answer dialogue in order to recreate knowledge in expert domains. Mimicking the structure of the human brain was and still is another method of creating AI algorithms. The first such model was the Ising model (formulated in 1925), used to describe ferromagnetism. The model was created without computers, but its mathematical properties are basically those of a recurrent neural network (RNN). Artificial neural networks (ANNs) comprise a family of algorithms featuring interconnected nodes, creating extensive collections of matrices that encode the training data of the model. The term machine learning (ML) often appears in connection with neural networks, but machine learning is its own discipline, covering ways to store data in machines and access it by using queries (avoiding the phrase “teaching computers”, because learning also requires cognitive processes, which are not part of ML algorithms).
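To make the matrix picture concrete, here is a minimal sketch of a forward pass through a tiny feed-forward network. Nothing in it comes from a real framework or trained model; the layer sizes, random weights, and the `forward` function are made up for illustration. The point is where a model’s “knowledge” actually lives: in its weight matrices.

```python
import numpy as np

# Minimal sketch of a feed-forward network: the trained model's
# "knowledge" lives entirely in the weight matrices and bias vectors.
# All sizes and numbers here are invented for illustration.

rng = np.random.default_rng(42)

# Two layers: 3 inputs -> 4 hidden nodes -> 1 output
W1 = rng.normal(size=(4, 3))   # weights connecting input to hidden layer
b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4))   # weights connecting hidden layer to output
b2 = rng.normal(size=1)

def forward(x):
    """One forward pass: matrix multiplications plus a nonlinearity."""
    hidden = np.tanh(W1 @ x + b1)
    return W2 @ hidden + b2

print(forward(np.array([0.5, -1.0, 2.0])))
```

Training adjusts the numbers in these matrices; once the input has influenced them, there is no obvious way to pull that single input back out again, which matters later for the security discussion.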

The most recent incarnation of AI algorithms is language models, preferably with large data sets. Recreating human language is a part of artificial intelligence research. The data needed for training is the most crucial ingredient. Without the preprocessed corpus of texts, images, or other kinds of data, the algorithm can’t do much. Selecting the data feed also controls what the algorithm can do. LLM algorithms cannot create new content. They remix and summarise data they already “know”. It is possible to collect data during a conversation. Conversation simulators based on Markov chains can do this by analysing input and extending their internal data structures. Again, this is not learning, it is just changing data structures. The algorithm cannot think on its own.
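As an illustration of how such a conversation simulator extends its internal data structures, here is a minimal Markov-chain sketch. The function names (`absorb`, `babble`) and the sample sentences are invented for this example; the mechanism is the real one: “learning” amounts to appending entries to a transition table and sampling from it.

```python
import random
from collections import defaultdict

# Minimal sketch of a Markov-chain conversation simulator: it stores
# word-transition observations and samples from them. Feeding it new
# input changes the data structure, nothing more.

transitions = defaultdict(list)

def absorb(text):
    """Extend the internal data structure with observed word pairs."""
    words = text.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def babble(start, length=8):
    """Remix stored data; no new content is created."""
    word, output = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    return " ".join(output)

absorb("the algorithm cannot think on its own")
absorb("the algorithm remixes data it has seen")
print(babble("the"))
```

Everything the simulator can ever say is a recombination of what went in, which is the same limitation the article describes for LLMs, just at a much smaller scale.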

Given the large amounts of input and the requirement to access and process the data that is part of a language model, there are several security implications. First, you cannot selectively remove data from the corpus once the algorithm has processed the input. This is true for neural networks and for language models. Current search engine and chat offerings implement security simply by maintaining block lists of “dangerous” sentences. This is like opening all ports on your firewall and only blocking specific ports once you get a security report. There is research on the security of LLMs. A recent pre-publication suggests that large AI models cannot be deployed safely. Poisoning attacks in particular, where the algorithm is fed false information, can be a threat to anyone using the technology.
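The firewall analogy can be made concrete with a toy block-list filter. The blocked phrases below are hypothetical, but the structural weakness is the real one: anything not literally on the list passes through, just as traffic to an unfiltered port does.

```python
# Toy sketch of the deny-list approach described above. The entries
# are made-up examples; real deployments share the same weakness:
# only known-bad phrases are rejected, everything else is allowed.

BLOCKED_PHRASES = {
    "how to build a weapon",       # hypothetical entry
    "disable the safety filter",   # hypothetical entry
}

def is_allowed(prompt: str) -> bool:
    """Deny-list check: reject only prompts containing a listed phrase."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_allowed("how to build a weapon"))             # False: exact match
print(is_allowed("how would one construct a weapon"))  # True: trivial rephrasing slips through
```

A default-deny design would be the firewall best practice; block lists are default-allow, which is why trivial rephrasings defeat them.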

DeepSec 2023 wants to explore attacks on AI at the next conference in November. If you have done tests or research on AI algorithms in the wild, please send us a message.

About René Pfeiffer

System administrator, lecturer, hacker, security consultant, technical writer and DeepSec organisation team member. Has done some particle physics, too. Prefers encrypted messages for the sake of admiring the mathematical algorithms at work.