Press Release: IT World in AI Mania

Sanna/ February 16, 2023/ Development, Legal, Press, Security

Artificial intelligence (AI) is on everyone’s lips, but its results fall short of the expectations placed on it.
[Image: a brain drawn in the fashion of a labyrinth. Drawing by Florian Stocker.]

Wouldn’t it be nice if computers could effortlessly give meaningful results to all kinds of questions from all kinds of unstructured data collections? Periodically, algorithms that do incredible things are celebrated in information technology. At the moment, it is the turn of artificial intelligence algorithms. Search engines are retrofitting AI. But the supposed product is far from real cognitive performance. Many open questions remain.

History of Algorithms

The first experts to work with algorithms to emulate human thought processes came from the fields of mathematics and philosophy. They wanted to formalise analytical thinking from the subfield of logic and describe it in models. In the 1950s, the algorithms were implemented on the computers that were emerging at the time. For lack of memory and computing power, complex implementations had to wait for modern hardware, which has only recently become able to handle such complexity in tolerable time spans. Frustration with the discrepancy between expectations and actual implementation led to the so-called “AI Winter” in the 1980s, when budgets for projects in this field were slashed. Reading today’s headlines, one could believe that all problems have been solved and that the algorithms’ answers are accurate in every situation. This is not the case. High expectations are being created, and wishful thinking is setting the course.

Speech Simulations are not Artificial Intelligence

The Generative Pre-trained Transformer (GPT) family of algorithms has produced a rapid succession of headlines. The GPT-1, GPT-2 and GPT-3 versions are language models that mimic human communication. At the core of the models are the underlying mathematical algorithms on the one hand and the text templates used for training on the other. The versions differ in the number of parameters (GPT-3 has 175 billion variables) and in the corpus of training content, which has grown steadily from version to version. ChatGPT is currently being celebrated because it can phrase sentences very well. However, the answers it provides are generated without cognition, i.e. the code does not think when generating an answer. It is merely a model for generating fluently readable text. This is not to belittle the effort behind the model, but it is important to know that ChatGPT cannot generate a text derived from its own thoughts. It relies on the content of the “learned” templates, which also ultimately determine the direction of the responses. The algorithm cannot provide statements or facts that are not contained in the learned text corpus.
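How such a model works can be illustrated with a toy example. The following sketch is not the GPT algorithm; it is a deliberately minimal, hypothetical next-word model that merely counts which word follows which in a tiny training text and then samples continuations from those counts. It shows how fluent-looking output can emerge from learned statistics alone, without any understanding of the content.

    import random
    from collections import defaultdict, Counter

    # Toy "language model": count which word follows which in the training text.
    # This is a deliberately simple stand-in, not the transformer architecture behind GPT.
    training_text = "the cat sat on the mat . the dog sat on the rug ."
    tokens = training_text.split()

    follow_counts = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        follow_counts[current][nxt] += 1

    def generate(start, length=8):
        """Sample a continuation purely from observed follow-up frequencies."""
        out = [start]
        for _ in range(length):
            candidates = follow_counts.get(out[-1])
            if not candidates:
                break
            words, counts = zip(*candidates.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))
    # Possible output: "the cat sat on the rug . the dog"
    # The text reads fluently, yet nothing here "understands" cats or rugs,
    # and no word outside the training text can ever be produced.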

The term artificial intelligence (AI) is therefore misleading. It is a field of research in mathematics, and the definition and measurement of natural intelligence in humans is itself not clearly settled. Tests and metrics exist, yet the property of intelligence manifests itself in different forms. Applying this fuzzy template for assessing intelligence in humans to computers therefore carries great fuzziness by definition. To put it bluntly, artificial intelligence algorithms should actually be tested with problems from philosophy, and not just with memorised formulas, picture templates or telephone directories. Apart from that, many products that advertise themselves as artificial intelligence contain quite ordinary, albeit complex, statistical analyses (known as machine learning) that can also deliver good evaluations.
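To make the last point concrete: much of what is marketed as “AI” is, in effect, a fitted statistical model. The following sketch, assuming the freely available scikit-learn library and entirely made-up numbers, trains an ordinary logistic regression on two numeric features; its “assessment” is nothing more than evaluating the fitted formula on new input.

    from sklearn.linear_model import LogisticRegression

    # Made-up example data: two numeric features per sample (say, message length
    # and number of links) and a label stating whether earlier analysts flagged it.
    X = [[120, 0], [80, 1], [300, 5], [250, 4], [90, 0], [310, 6]]
    y = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X, y)

    # The "evaluation" is simply the fitted statistical model applied to new numbers.
    print(model.predict([[270, 3]]))        # e.g. [1]
    print(model.predict_proba([[270, 3]]))  # class probabilities from the fitted formula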

Magic Solutions for Information Security

Against the background described above, products for information security are occasionally offered that adorn themselves with the title “AI”. The label is often used to promote a collection of methods for the automatic evaluation of security-relevant parameters, which use a combination of learned data sets and predefined analyses to make an assessment of right and wrong. Automated evaluation of large amounts of data is a strength of AI and machine learning algorithms. They can also be used to analyse access patterns or network traffic, or as a basis for deeper investigation. It is important never to forget that these applications do not perform any cognitive work in the analysis. They either replicate something they have learned during the training phase or react to patterns that have been recognised. The context escapes the algorithm if it is not explicitly supplied as a rule set in the input.
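A minimal sketch of this kind of pattern-based evaluation, again assuming scikit-learn and using invented traffic statistics, could look like this. The model flags deviations from what it has “seen” during training, but it has no notion of why a deviation matters or what surrounds it.

    from sklearn.ensemble import IsolationForest

    # Invented per-connection statistics: [bytes sent, packets, distinct ports contacted].
    # The "training phase" only captures what normal traffic looked like in this sample.
    normal_traffic = [
        [1500, 12, 1], [2200, 18, 1], [1800, 15, 2],
        [2100, 17, 1], [1600, 13, 1], [1900, 16, 2],
    ]

    detector = IsolationForest(random_state=0).fit(normal_traffic)

    # New observations are scored only against the learned pattern; the context
    # (who, why, which business process) is invisible to the algorithm.
    new_connections = [[2000, 16, 1], [950000, 400, 60]]
    print(detector.predict(new_connections))  # e.g. [ 1 -1] -> 1 = inlier, -1 = flagged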

The specific problems of information security thrive on cross-connections and context. In most cases, the causes of security incidents lie in the interconnections between different sub-areas. In certain situations, these are easier for humans to recognise than for machines. Therefore, one should never be lulled into a false sense of security. Even with a success rate of 90%, 99% or 99.9%, there is still enough room for successful attacks.
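The arithmetic behind that last sentence is worth spelling out. Assuming, purely for illustration, one million security-relevant events per day, even very good detection rates leave a considerable residue:

    # Purely illustrative numbers: one million security-relevant events per day.
    events_per_day = 1_000_000

    for detection_rate in (0.90, 0.99, 0.999):
        missed = events_per_day * (1 - detection_rate)
        print(f"{detection_rate:.1%} detection -> about {missed:,.0f} events slip through per day")

    # 90.0% detection -> about 100,000 events slip through per day
    # 99.0% detection -> about 10,000 events slip through per day
    # 99.9% detection -> about 1,000 events slip through per day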

AI and Search Engines are the last Customers for Big Data

The core ingredient of AI algorithms is the data used for training. The effort for this is considerable. GPT-3 had as input the results of the Common Crawl project, which crawls public websites. The amount is difficult to describe because the texts are reduced to so-called tokens during the learning process, which are then used in the language model. In addition, selected texts from the web, the complete Wikipedia and two selections of books published on the internet were used for the learning phase. GPT’s language model is thus limited to these templates. The sheer amount of data is, of course, large enough that this limitation does not become apparent in a few conversations. The euphoria over ChatGPT shows this very clearly. The large amounts of data also mean that language models like GPT-3, and especially their successors, can only be run with correspondingly massive computing power and memory. These algorithms are therefore a very attractive means of promoting cloud platforms. At the other end, nothing works without a data connection to the systems that host the algorithms and their training data.
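The token reduction mentioned above can be illustrated with the openly available tiktoken library (an assumption for the sake of the example; any comparable tokenizer shows the same effect). Text is cut into numbered fragments, and it is these fragments, not words or meanings, that the model is trained on.

    import tiktoken  # assumes the tiktoken package is installed

    # "r50k_base" is the byte-pair encoding historically associated with GPT-3 class models.
    enc = tiktoken.get_encoding("r50k_base")

    text = "Artificial intelligence is on everyone's lips."
    token_ids = enc.encode(text)

    print(token_ids)  # a list of integers
    print(len(text), "characters ->", len(token_ids), "tokens")
    print([enc.decode([t]) for t in token_ids])  # the text fragments behind the numbers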

Legal Consequences of Algorithms

According to a survey of companies in the financial sector by the World Economic Forum, 85% of companies use algorithms from the field of artificial intelligence and machine learning in at least one area of application. The report, published in January 2020, represents a sentiment survey. This raises the question of what the companies surveyed plan to do with the results of the algorithms. In certain areas, the machine-generated statement is probably passed on directly and used to decide customers’ enquiries. With IT security products, systems then automatically decide whether a digital attack has occurred. If everything has been decided correctly, there is no critical questioning. But what happens to the wrong decisions? The British comedy series Little Britain coined the later catchphrase “computer says no” in a sketch from 2004. When an algorithm’s decision has serious consequences, the question of responsibility arises. Internet search engines have long been used for medical self-diagnosis, for example. Now there is the possibility of doing the same through a dialogue with a conversation simulator. The consequences in case of mistakes could be severe. Advances in computerised speech models must not be allowed to let the responsibility for potentially false statements disappear into thin air.

The DeepSec conference team is organising a one-day conference on 19 April 2023 dealing with legal issues and their connection to IT security and information technology in general. Tickets can be bought at our ticket shop. The full schedule will be available in a few weeks.

 
