DisinfoCon – A values-based approach to the AI revolution

Co-organised by Democracy Reporting International, Forum Transregionale Studien, 
Berliner Landeszentrale für politische Bildung and Verfassungsblog.

Thursday 20 February 2025
Revaler Str. 29, 10245 Berlin

18:30 – 20:00

In partnership with the German Federal Foreign Office, Democracy Reporting International invites you to our second annual hybrid conference. DisinfoCon is a forum for civil society, policymakers, journalists and AI practitioners to meet, exchange and align on a values-based approach to the AI revolution affecting our democracies. 

In just a few months, the AI revolution has captivated the globe. Robust regulatory frameworks, mitigation mechanisms, and strong democratic values are needed to navigate these new technological waters while maintaining the integrity of our democratic institutions and governance. Our network of global experts will convene to discuss the new technological developments that pose a threat to democracy and to mobilise an effective response as a counter-disinformation community.

https://youtube.com/watch?v=aqellVtQb8k&si=rb7Eu79N8b80lLXB&start=2704

To learn more about DisinfoCon and see the updated agenda, visit our site disinforadar.com

The panels featured a diverse array of experts who provided valuable insights into the evolving landscape of disinformation and its impact on democracy. Here are the main takeaways from each:
Interview with Dr. Peter Ptassek
Dr. Peter Ptassek, the Director for Strategic Communication and Public Diplomacy at the German Federal Foreign Office, explained how German policy towards disinformation has evolved in the wake of the Russian invasion of Ukraine. While previously emphasising reactive debunking, Germany is now “discovering the importance of active communication” and taking advantage of its network of embassies abroad to push a pro-democratic narrative. Dr. Ptassek outlined the rationale behind this “show, don’t tell” approach and the challenges Germany faces in addressing disinformation in the near future.

Keynote speech: The new algorithm of foreign affairs
Next up, Danish Ministry of Foreign Affairs Tech Ambassador Anne Marie Engtoft Meldgaard explained the significance of the current moment in AI and the disinformation risks currently facing democracies. Meldgaard emphasised the particular dangers generative AI poses for governments seeking to address disinformation, as the technology can increasingly create highly convincing false content. She also stressed the importance of regulation that is agile enough to adapt to the rapid pace of technological change.
Will generative AI be the end of democracy?
Dr. Renana Keydar, Senior Lecturer at the Center for Digital Humanities at the Hebrew University of Jerusalem, engaged in a debate with Pegah Maham, Project Director Artificial Intelligence & Data Science at the Stiftung Neue Verantwortung. Dr. Keydar argued that generative AI can benefit democratic processes: it can give people more information about political candidates and parties, break down complex policy issues into more digestible content, translate information into different languages, and identify media bias to provide more balanced information. Ms. Maham, on the other hand, argued that the risks of generative AI outweigh the benefits. She stressed that governments and civil society alike should be concerned about the “incentive landscape” when considering the applications of AI in democratic processes, and that many political parties and politicians worldwide will have few qualms about employing AI-generated disinformation if it helps them score cheap political points or secure an electoral win.

Problems and solutions for inclusive AI practices
Luísa Franco Machado, Data Economy and AI Ethics Advisor at GIZ, spoke about the dangerous potential of generative AI to amplify stereotypes about race and gender. In her presentation, Ms. Machado highlighted the patriarchal and sometimes racist nature of AI systems and language models, as well as the centring of male voices in the debate around the future of AI. Mutale Nkonde, Founding CEO of AI for the People, then spoke about the dangers of training AI models on potentially racist data that excludes the experiences, cultures, and histories of marginalised groups. In her speech, she emphasised the need to develop national approaches and global standards for AI that focus on human and civil rights, and to ensure diverse teams are behind the design and training of AI systems.
Regulating AI to safeguard democracy: Where to start?
In the next panel, DRI’s Dr. Jan Nicola Meyer moderated a discussion on the AI regulation landscape with Prof. Dr. Joanna Bryson of the Hertie School of Governance and Craig Matasick of the Organization for Economic Cooperation and Development (OECD). Mr. Matasick spoke on the importance of regulation for safeguarding democracy from the more dangerous aspects of artificial intelligence, while Dr. Bryson addressed the relationship between government regulation and private innovation. The speakers also discussed efforts by authoritarian countries such as China to rein in AI, and what lessons democratic countries can learn from such attempts.

The hidden costs of the rise of AI
Odanga Madung, Senior Researcher at the Mozilla Foundation, and Richard Mathenge, Administrator of the African Content Moderators Union, spoke on the abusive and extractive practices of tech companies in the realm of AI, specifically in the curation of training data. Mr. Mathenge recounted his and his colleagues’ traumatic experience classifying hateful, violent, and abusive content in Kenya on behalf of OpenAI through its outsourcing partner Sama. Mr. Madung described the actions of these companies as “digital colonialism”, in which the benefits of new technologies flow to Western countries while the harms are exported abroad.
Exploring AI detection and potential
In this panel, Kate Brightwell, Head of Strategy & Engagement at Adobe’s Content Authenticity Initiative, and Claire Leibowicz, Head of the AI and Media Integrity Program at the Partnership on AI, discussed with the audience recent advancements in technical solutions for detecting AI-generated content. Ms. Brightwell emphasised the real-world harm synthetic disinformation can cause and the need for tech companies to coalesce around common standards of verification, such as content credentials. Ms. Leibowicz concurred, stressing that such efforts require greater transparency and collaboration with experts across fields and companies.

The DSA in action
In the final panel of the conference, DRI’s Richard Kuchta interviewed Alberto Rabbachin of the European Commission, Katarína Klingová, Senior Research Fellow at GLOBSEC, and Jakub Rybnikár of the Council for Media Services on the impact of the DSA in the upcoming Slovak elections. The speakers addressed the specific challenges facing the Slovak online information space, such as the low number of Slovak-language content moderators on popular media platforms, as well as the frustrations civil society organisations have faced in trying to hold large tech companies accountable for the content on their sites.
In a nutshell, this year’s event was a remarkable gathering of minds dedicated to tackling the challenges of disinformation in our digital age. #DisinfoCon23 provided a platform for meaningful discussions and collaborations, reaffirming our commitment to safeguarding democracy in the face of disinformation.

Thank you to all our esteemed panelists and attendees for your invaluable contributions. Together, we continue to work towards a more informed and resilient digital world. Meanwhile, share your thoughts with us using the hashtags #DisinfoCon23 and #DisruptDisinfo!
