DisinfoCon 2024 – Taking stock of Information Integrity in the Age of AI

Democracy Reporting International held its third annual hybrid conference on September 26, 2024. DisinfoCon is a forum for civil society, policymakers, journalists and AI practitioners to meet, exchange ideas and align on a values-based approach to the AI revolution affecting our democracies.

Disinformation poses a significant threat to our lives and the foundations of democracy, especially in this pivotal election year. The spread of false information can undermine public trust, manipulate voter behavior, and distort democratic processes. With the rise of social media and advanced technologies like AI, disinformation campaigns have become more sophisticated, making it harder for people to discern truth from falsehood.  

This year, as voters went to the polls in numerous elections around the globe, the impact of disinformation was more critical than ever. But which predictions came true? And what new, unforeseen challenges emerged? These are the questions we sought to answer at DisinfoCon.

Watch the conference here!

After DisinfoCon 2022 and 2023, this year we continued the conversation about disinformation that we began three years ago. With generative AI technologies now available to consumers around the world and playing a role in our daily lives, we built on the insights from our previous editions, exploring the complexities of the evolving AI landscape, its effects on democracy, and the new challenges that have surfaced during this ‘super year’ of elections.

Access the full programme here

Keynote speech by Carl Miller

To kick off the event, Carl Miller, author and founder of the Centre for the Analysis of Social Media at Demos, gave a presentation on the role of artificial intelligence in influence operations. Mr. Miller stressed that AI is being used by researchers and governments to track trends on social media at the same time as it is being exploited by malicious actors, with new research methods such as semantic analysis offering insight into where and when impersonation is occurring online. Attention, he argued, is the number one goal of influence operations, which work not just by lying, but also by flattering, confirming biases and appealing to in-group identities. To truly combat influence operations, governments must target the commercial and criminal infrastructure that supports these campaigns and make them as costly as possible.

Beyond engagement: algorithmic design and new strategies for countering harmful content online

Next up, DRI’s Ognjan Denkovski moderated a panel discussion with Paula Gori, Secretary-General and Coordinator of EDMO; Felix Kartte, Mercator Senior Fellow; Hallie Stern, American Democracy & Technology Policy Translation Fellow at the McCain Institute and Integrity Institute member; and Jonathan Stray, Senior Scientist at the Center for Human-Compatible AI at UC Berkeley. Together, they discussed the democratic risks of social media companies relying primarily on engagement-based algorithms, which too often prioritise sensational content over truth. Mr. Stray began the conversation by explaining why engagement-based algorithms are so widespread and why selecting an alternative model is challenging. Mr. Kartte and Ms. Gori elaborated on the requirements the DSA imposes on algorithms, such as giving users greater knowledge of and control over the content recommended to them, and providing civil society with greater access to platform data. Bridging-based ranking, in which algorithms surface content from across the range of opinions on an issue, was discussed as a way to reduce online polarisation. Ms. Stern stressed that any transition away from the current models would be heavily opposed by large social media companies, as engagement remains highly profitable.

Auditing generative AI models: identifying and mitigating risks to democratic discourse   

In the following panel, DRI’s Francesca Giannaccini interviewed Brando Benifei, Member of the European Parliament; Oliver Marsh, Head of Tech Research at AlgorithmWatch; and Lucie-Aimée Kaffee, EU AI Policy Lead & Applied Researcher at Hugging Face. Mr. Benifei began by explaining the process of getting the EU’s AI Act passed and the significance of such a landmark piece of legislation. Mr. Marsh stressed that much of the act’s impact will depend on its implementation, which has yet to be seen. He also highlighted his concerns about the risks of AI, in particular the abuses that occur during training and the environmental cost of the energy models consume. Dr. Kaffee described how generative AI is often treated as a “solution without a problem” and pointed to the role of open-source AI models in setting standards for transparent documentation. What transparency actually means was a topic of some debate: system cards and model cards are a helpful first step, but there must also be room for people to make specific inquiries into how models behave on certain topics and into the decision-making processes behind their development.

Future-proofing digital Europe: DSA and AI Act synergies    

To discuss where the EU’s Digital Services Act and AI Act intersect, DRI’s Daniela Alvarado Rincón moderated a panel with Daniel Holznagel, a judge at the Kammergericht Berlin; Paddy Leerssen, Postdoctoral Researcher at the University of Amsterdam; and Bianca-Ioana Marcu, Global Privacy Policy Manager at the Future of Privacy Forum (FPF). Ms. Marcu began by explaining how these new regulations require more detailed information about general-purpose AI (GPAI) models, as well as what the EU AI Office defines as “systemic risks” in such models. Mr. Holznagel went on to clarify where search engines and LLM-powered chatbots fall within existing law, and to point out possible blind spots in the DSA. According to Mr. Leerssen, the integration of AI systems into existing products like search engines muddies the legal waters, as platforms may no longer be considered mere hosts of content, but also producers of the content their AI systems generate. Overlaps between the DSA, the AI Act and the GDPR were also addressed, and outstanding questions were raised, such as whether LLMs can be viewed as processors of personal and sensitive data. Finally, while the DSA may require greater transparency about how recommender algorithms work, it is increasingly difficult for researchers and journalists to track the outputs of those algorithms, i.e., to gain meaningful access to social media data.

Disinformation and the US 2024 presidential elections: risks and vulnerabilities    

In the final panel of the day, DRI’s Digital Democracy research associate Duncan Allen hosted a conversation on disinformation in the upcoming US elections with Brandi Geurkink, Executive Director of the Coalition for Independent Tech Research; Alex Sängerlaub, Founder & Director at futur eins; and Melinda Crane, independent journalist and former DW TV correspondent. Dr. Crane compared disinformation to a “parasite that looks for a useful host – the greater the polarisation of a community, the better host it will make”. Panelists discussed why certain narratives, such as former President Trump’s claim that Haitian immigrants were “eating the pets” of Ohio residents, spread so rapidly and widely despite a lack of evidence. Ms. Geurkink spoke about the increasingly difficult task civil society and fact-checkers face in countering these narratives, and about the Republican Party’s ongoing efforts to cut off transparency and accountability for election disinformation. Mr. Sängerlaub explained how social and news media bubbles prey upon those with low news media literacy and create separate “bespoke realities”. Also discussed were the role of domestic communication channels in promoting foreign disinformation, and the role of X, formerly Twitter, in creating a flourishing market for sensationalist disinformation.

26.09.2024
online
9:30 CEST
