DeepSec 2023 Talk: Deepfake vs AI: How To Detect Deepfakes With Artificial Intelligence – Dr. Nicolas Müller

Sanna / June 6, 2023 / Conference

Artificial intelligence is developing at a breathtaking pace, already surpassing humans in some areas. But with opportunity comes the potential for abuse: generative models are getting better at creating deceptively real deepfakes – audio or video recordings of people that are not real, but entirely digitally created. While the technology can be used legitimately for film and television, it also lends itself to misuse. This lecture illustrates the problem using audio deepfakes, i.e. fake voice recordings. The technical background of synthesis will be highlighted, and current research on countermeasures will be presented: Can we use AI to expose deepfakes? Can we learn to recognise deepfakes, and if so, how?
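For readers wondering what "using AI to expose deepfakes" can look like in practice, here is a minimal, hypothetical sketch of a spectrogram-based real-vs-fake audio classifier in PyTorch. It is not taken from the talk: the model name, architecture, and the random dummy data are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the speaker's method): a binary
# real-vs-fake audio classifier trained on log-mel spectrograms.
# Assumptions: 16 kHz mono clips, PyTorch + torchaudio available.
import torch
import torch.nn as nn
import torchaudio


class SpoofDetector(nn.Module):  # hypothetical name for this sketch
    def __init__(self):
        super().__init__()
        # Turn raw waveforms into mel spectrograms before the CNN.
        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single logit: > 0 means "fake", < 0 means "real"
        )

    def forward(self, waveform):
        spec = torch.log(self.mel(waveform) + 1e-6).unsqueeze(1)  # (B, 1, mels, time)
        return self.net(spec).squeeze(1)


# One dummy training step on random tensors standing in for real/fake clips.
model = SpoofDetector()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
waveforms = torch.randn(8, 16000)            # eight 1-second clips
labels = torch.randint(0, 2, (8,)).float()   # 1 = fake, 0 = real
loss = nn.functional.binary_cross_entropy_with_logits(model(waveforms), labels)
loss.backward()
optim.step()
```

In real research settings the training data would be labelled corpora of genuine and synthesised speech rather than random tensors, and the architecture would be considerably more elaborate; the sketch only shows the overall shape of the approach.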

We asked Dr. Nicolas Müller a few questions about his talk.

Please tell us the top 5 facts about your talk.

  1. We will listen to Angela Merkel recite a poem and eat strawberries with Obama on the beach.
  2. We will learn how deepfakes are created.
  3. We will see results of a user study, where human participants compete against an AI in detecting deepfakes.
  4. You will be able to perform this experiment yourself after the talk.
  5. The talk will address both experts and laypeople.

How did you come up with it? Was there something like an initial spark that set your mind on creating this talk?

It’s my research topic.

Why do you think this is an important topic?

Because deepfakes change the information space completely, and it is important to be aware of the dangers as well as the possibilities.

Is there something you want everybody to know – some good advice for our readers, maybe?

We’ll try to give an unbiased look at both the challenges and the possibilities of deepfake technology.

A prediction for the future – what do you think will be the next innovations or future downfalls when it comes to your field of expertise / the topic of your talk in particular?

Deepfakes will become more available and more realistic, and will increasingly be (mis)used.

Dr. Nicolas Müller studied mathematics, computer science and theology at the University of Freiburg, graduating with distinction in 2017. He completed his doctorate in machine learning at TU Munich in 2022 on the topic of ‘Security of Machine Learning Training Data’. He has been a research associate at Fraunhofer AISEC in the department ‘Cognitive Security Technologies’ since 2017. His research focuses on the reliability of AI models, ML shortcuts and audio deepfakes.
