DeepSec 2024 Talk: The Tyrant’s Toolbox – Julian & Pavle B.
Social media and our communications systems have devoured any semblance of privacy, putting the eyes and ears of authoritarian and wannabe fascist types into the pockets of each of us and radically erasing whatever distance once existed between those who exercise authority and the human objects of their control, both at home and abroad. As Professor Ronald J. Deibert, founder of Citizen Lab, eloquently highlights in his book “Reset: Reclaiming the Internet for Civil Society”: “…recent years have brought about a disturbing descent into authoritarianism, fueled by and in turn driving income inequality in grotesque proportions, the rise of a kind of transnational gangster economy.”
As we continue our descent into a global madness fueled by AI, spyware, algorithms, and misinformation, tyrants around the world continue to expand their toolbox. In our talk, we examine the weapons with which they maintain their stranglehold on power, whether in an authoritarian regime or an illiberal democracy. Building upon our paper, “Eyes in The Skies: A Study of Spyware’s Usage by Authoritarian and Illiberal Regimes,” we expand our scope to better understand which technologies are enabling these frightening trends. Our aim is to raise awareness of these trends and to empower those who wish to combat them.
We asked Julian and Pavle a few more questions about their talk.
Please tell us the top 5 facts about your talk.
- Surveillance capitalism is an increasingly lucrative sector of the economy.
- Surveillance and data aggregation technologies are becoming normalized, increasingly seeping through various sectors of life such as public administration and policing.
- Initially popular among authoritarian regimes, surveillance and data aggregation technologies have been adopted by liberal democracies as well, their use and intrusiveness increasing with social tensions, political polarization, and an uncertain geopolitical landscape.
- Laws and regulations are not ready for technologies like AI, threatening civil and human rights everywhere from war zones to municipal administrations.
- A new type of citizenship is needed to face these challenges in the digital era.
How did you come up with it? Was there something like an initial spark that set your mind on creating this talk?
We have jointly been observing the rise of artificial intelligence for the past three years. Not only was it clear that civil authorities lacked the legal tools to prevent its use for harmful purposes, but its growing deployment on the battlefield made us ask: should it be this straightforward? Tech’s increased presence in military arsenals showed us how formidable an asset it is, and it raised more questions than answers about accountability, ethical use, and respect for human rights. Within liberal democratic states, AI and data aggregation technologies are increasingly used to administer populations and ensure security within their territories. This, however, comes at the cost of accountability and relies on haphazard, intrusive data collection methods.
We feel that the conversation about what protections and safeguards exist to preserve citizens’ rights and ensure accountability is timely and needed. Many do not know that these technologies are being used, or at what cost; this is precisely what we seek to address.
Why do you think this is an important topic?
We believe that our discussion of AI and data aggregation technologies is essential because their use is reshaping society at a breakneck pace. This use raises serious concerns about accountability, ethics, and human rights, and it routinely outpaces existing legal frameworks. In turn, this opens the way for irresponsible, insecure, and unjustified use, particularly in military and surveillance contexts, given that oversight is, by definition, insufficient. The extensive data collection that AI requires can also violate privacy, especially within liberal democracies, where such technologies are increasingly used to administer populations. Moreover, the growing use of AI in warfare introduces the possibility of autonomous weapons and decisions made without human control, amplifying concerns about global security. This makes it essential to discuss how we can regulate these technologies, ensure transparency, and safeguard fundamental rights before their unchecked growth leads to unintended consequences.
Is there something you want everybody to know – some good advice for our readers maybe?
If you value a free and open internet, this talk is a must-attend. The rise of artificial intelligence (AI) and data-driven technologies poses genuine threats to privacy, freedom, and accountability online. As AI becomes more embedded in our digital spaces, the risks of intrusive surveillance, unchecked data collection, and ethical violations are growing. Join us to learn about the hidden dangers and how we can fight to protect our digital freedoms, safeguard privacy, and demand accountability in the age of AI. We need your voice and actions to keep the internet free for all!
A prediction for the future – what do you think will be the next innovations or future downfalls with your field of expertise/the topic of your talk in particular?
In the future, the tension between the benefits and risks of artificial intelligence (AI) and data technologies will intensify. As these systems become more advanced, governments and corporations may increasingly rely on AI to manage public life, enforce security, and even wage wars. This could lead to a future in which privacy is severely compromised and mass surveillance and data collection become the norm. The misuse of AI in military contexts could also introduce new global security threats, including autonomous weapons acting beyond human control. On the other hand, if society rises to the challenge, we could see the development of robust frameworks that balance technological innovation with protecting human rights and freedoms. The future will probably depend on how quickly we act to implement safeguards and ensure that AI remains a tool that empowers, rather than restricts, individuals. The choices made in the coming years will shape whether the internet remains a free and open space or becomes a heavily regulated and monitored environment.
Penetration tester by day, Julian identifies vulnerabilities to exploit for a wide range of clients. OSINT enthusiast by night, Julian follows emerging threats to the Western world.
A social scientist by trade, Pavle is a risk analyst specializing in risk assessment, audit, and privacy. Outside of his consulting career, Pavle is a researcher, his areas of interest sitting at the intersection of surveillance technologies, civil rights, and global affairs.