Meet the Future of AI: Generative AI and Democracy

On 19 June 2024, six European-funded projects focused on AI and disinformation (AI4Media, Titan, veraAI, AI4Trust, AI4Debunk and AI-CODE), together with the European Commission, hosted the event "Meet the Future of AI - Generative AI and Democracy" in Brussels. The gathering brought together approximately 60 in-person participants to explore the critical interplay between AI and democratic processes, presented innovative AI-powered solutions to counter disinformation, and fostered insightful discussions. Below are the highlights and key takeaways from the sessions.

Opening Remarks and Event Overview

Krisztina Stump, Head of the Unit in charge of combatting online disinformation at the European Commission (EC), opened the event by emphasising the EC's commitment to tackling disinformation and preventing the misuse of deepfakes to safeguard democracies. 

She noted that the World Economic Forum has identified disinformation as the top societal threat, and that Eurobarometer and Ipsos surveys show a high level of public concern.

Although the recent European elections saw no major AI incidents, the risk remains, as incidents in Slovakia and India have shown.

Stump particularly highlighted the dual role of AI as both a threat and a tool in fighting disinformation. To this end, the European Commission has invested €28 million in Horizon Europe projects for AI solutions, with regulatory frameworks like the Digital Services Act (DSA), the Code of Practice on Disinformation, and the AI Act aiming to mitigate these risks. 

Finally, she stressed the importance of technology and regulation working together to address the challenges posed by AI.

Krisztina Stump during her opening address (photo: event organising team)

Panel 1: Threats and Opportunities of Generative AI for Mis- and Disinformation

Moderated by Symeon (Akis) Papadopoulos, Principal Researcher at CERTH-ITI and coordinator of the veraAI project, the first panel explored the dual nature of AI. 

Denis Teyssou from Agence France-Presse (AFP) gave a demonstration showcasing several recent examples of deepfakes and other multimodal manipulations. He showed that these manipulations can be effectively debunked using AI-powered tools integrated into the verification plugin supported by the veraAI project (and first developed in the EC co-funded projects InVID and WeVerify). His presentation highlighted the practical application of these tools in combating disinformation.

Similarly, Aqsa Farooq from the University of Amsterdam discussed the potential influence of AI-enabled operations on the 2024 European Parliament (EP24) elections. She focused on the perspectives of citizens in the Netherlands, Germany, and Poland regarding the threat of AI-generated disinformation throughout the election cycle, noting that Poland showed the highest level of concern and activity in identifying and sharing false information.

On the other hand, Samuel Power from the University of Sussex highlighted the benefits of using AI to improve election oversight. His team has developed natural language processing techniques to categorise and classify election spending and donation data, with the aim of creating regulatory applications for the UK Electoral Commission and other election management bodies around the world. 
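As a rough sketch of the kind of pipeline this describes, the example below categorises free-text spending descriptions with a standard TF-IDF plus logistic-regression baseline. The categories, records, and model choice are illustrative assumptions, not details of the Sussex team's actual system.

```python
# Illustrative baseline for categorising election spending records.
# All data and category labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical free-text descriptions from spending returns.
records = [
    "Facebook sponsored posts, constituency campaign",
    "Google search ads, get-out-the-vote drive",
    "Printing of 5,000 door-to-door leaflets",
    "Venue hire for campaign rally",
]
labels = ["advertising", "advertising", "unsolicited material", "public meetings"]

# TF-IDF features feeding a linear classifier: a common, transparent
# baseline for short-text categorisation tasks like this one.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(records, labels)

# Categorise a new, unseen spending description.
print(model.predict(["Leaflet distribution to 2,000 households"]))
```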

Turning to corporate efforts, Microsoft's Tjade Stroband detailed the company's initiatives to safeguard democratic processes, including its preparations for the 2024 elections and its collaboration with industry peers to combat deceptive AI use. In particular, Stroband highlighted Microsoft's targeted voter tools, such as redirecting Copilot and Bing searches to official EU information, the 'Check. Recheck. Vote.' public awareness campaign, and the company's rapid response system for reporting deepfakes.

Max van Drunen from the AI, Media, and Democracy Lab at the University of Amsterdam concluded the panel by discussing the regulatory landscape concerning AI during the recent EU elections. He focused on the obligations of platforms under the Digital Services Act, emphasising the need for transparency, reliable information, and monitoring of AI-generated content. Van Drunen also discussed the evolving legal landscape, including upcoming regulations on political ads and the AI Act's mandate for disclosing deepfakes and AI-influenced content, highlighting the importance of labelling and limiting the spread of misleading information.

Panel 1 (photo: event organising team)

Panel 2: Public Expectations and AI in News and Politics

The second panel, moderated by Gina Neff, Executive Director of the Minderoo Centre for Technology & Democracy at the University of Cambridge, shifted focus to public perceptions and trust in generative AI. 

Rasmus Kleis Nielsen from the Reuters Institute for the Study of Journalism initiated the discussion by examining public expectations of AI's role in news and politics. He highlighted the significant variability in trust across sectors and stressed the necessity for transparency to maintain credibility. Nielsen pointed out that while about a third of people have used generative AI tools, their impact and trustworthiness vary by sector, with healthcare, for instance, being more trusted than social media or news production.

Building on this, Riccardo Gallotti from Fondazione Bruno Kessler (FBK) presented research on the persuasive capabilities of large language models. His study found that participants debating GPT-4 were more likely to agree with their AI opponents than with human ones when the model had access to their personal information, underscoring the significant impact of personalisation on AI-driven persuasion.

Shifting to the policy perspective, Eliška Pírková from Access Now discussed the lesser-than-anticipated role of AI-generated disinformation in the EU elections, arguing that the main issue lies in its dissemination through online advertising. She also stressed the importance of balancing the fight against deepfakes with protecting freedom of expression. Pírková further touched upon the risks posed by the latest AI systems and potential mitigation strategies, pointing to the importance of robust enforcement to counter the profit-driven spread of AI-generated disinformation. Finally, she highlighted a collaborative study by Access Now, AlgorithmWatch and AI Forensics focusing on watermarking AI-generated content to ensure transparency and accountability, the results of which will also be made available in a paper.

Finally, Francesco Saverio Nucci, coordinator of the TITAN project, explored the potential of generative AI to enhance citizens' critical thinking. He emphasised that disinformation aims to undermine critical thinking, which is essential for media literacy. Nucci proposed the use of personalised large language model (LLM) chatbot coaches that engage users in Socratic dialogue, posing simple yet insightful questions to help identify signs of conspiratorial thinking or disinformation. This approach, in which AI agents guide users to critically evaluate the information they encounter, can empower citizens to better spot and counter disinformation.
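As a rough illustration of what such a Socratic coach might look like, here is a minimal sketch built on an OpenAI-compatible chat API. The model name, system prompt, and loop structure are illustrative assumptions; the article does not describe TITAN's actual architecture.

```python
# Illustrative sketch of a Socratic media-literacy coach.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# in the environment; the model and prompt are hypothetical choices,
# not details of the TITAN system.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a media-literacy coach. Never state whether a claim is true. "
    "Instead, ask one short, simple question at a time that helps the user "
    "examine the claim's source, evidence, and plausibility for themselves."
)

history = [{"role": "system", "content": SOCRATIC_PROMPT}]

def coach_turn(user_message: str) -> str:
    """Send the user's message and return the coach's next Socratic question."""
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Example turn: the coach responds with a probing question, not a verdict.
print(coach_turn("I read that the election was decided before any votes were counted."))
```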

Many discussions ahead of and during the event (photo: event organising team)

Conclusion

Peter Friess from the European Commission’s DG CONNECT concluded the conference by highlighting the importance of bridging knowledge and competencies among stakeholders both large and small. Friess elaborated on the challenges ahead and expressed hope for a collective statement on dealing with generative AI and disinformation, outlining the need for clear actionable steps to use AI in fighting disinformation. 

The long but rewarding day ended with an informal networking session, also providing an opportunity for participants to discuss insights and future collaborations over refreshments.

Authors: Heini Järvinen & Inès Gentil (EU DisinfoLab)

Editor: Jochen Spangenberg (DW)
 

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.