Moderated by Symeon (Akis) Papadopoulos, Principal Researcher at CERTH-ITI and coordinator of the veraAI project, the first panel explored the dual nature of AI.
Denis Teyssou from Agence France-Presse (AFP) gave a demonstration showcasing several recent examples of deepfakes and other multimodal manipulations. He showed how these manipulations can be effectively debunked using AI-powered tools integrated into the verification plugin, supported by the veraAI project (and originally developed in the EC co-funded projects InVID and WeVerify). His presentation highlighted the practical application of these tools in combating disinformation.
Next, Aqsa Farooq from the University of Amsterdam discussed the potential influence of AI-enabled operations on the 2024 European Parliament (EP24) elections. She focused on how citizens in the Netherlands, Germany, and Poland perceived the threat of AI-generated disinformation throughout the election cycle, noting that respondents in Poland showed the highest level of concern and the most activity in identifying and sharing false information.
On the other hand, Samuel Power from the University of Sussex highlighted the benefits of using AI to improve election oversight. His team has developed natural language processing techniques to classify election spending and donation data, with the aim of creating regulatory applications for the UK Electoral Commission and other election management bodies around the world.
Turning to corporate efforts, Microsoft's Tjade Stroband detailed the company's initiatives to safeguard democratic processes, including its preparations for the 2024 elections and collaboration with industry peers to combat deceptive AI use. In particular, Stroband emphasised Microsoft's targeted voter tools, such as redirecting Copilot and Bing searches to official EU information, the 'Check. Recheck. Vote.' public awareness campaign, and the company's rapid response system for reporting deepfakes.
Max van Drunen from the AI, Media, and Democracy Lab at the University of Amsterdam concluded the panel by discussing the regulatory landscape concerning AI during the recent EU elections. He focused on the obligations of platforms under the Digital Services Act, emphasising the need for transparency, reliable information, and monitoring of AI-generated content. Van Drunen also discussed the evolving legal landscape, including upcoming regulations on political ads and the AI Act's mandate for disclosing deepfakes and AI-influenced content, highlighting the importance of labelling and limiting the spread of misleading information.
The second panel, moderated by Gina Neff, Executive Director of the Minderoo Centre for Technology & Democracy at the University of Cambridge, shifted focus to public perceptions and trust in generative AI.
Rasmus Kleis Nielsen from the Reuters Institute for the Study of Journalism opened the discussion by examining public expectations of AI's role in news and politics. He highlighted the significant variability in trust across sectors and stressed the necessity of transparency to maintain credibility. Nielsen pointed out that while about a third of people have used generative AI tools, their perceived impact and trustworthiness vary by sector: AI use in healthcare, for instance, is trusted more than in social media or news production.
Building on this, Riccardo Gallotti from Fondazione Bruno Kessler (FBK) presented research on the persuasive capabilities of large language models. His study found that when GPT-4 had access to participants' personal information, those debating the model were more likely to agree with their AI opponents than with human ones. This underscores the significant impact of personalisation on AI-driven persuasion.
Eliška Pírková from Access Now discussed the smaller-than-anticipated role of AI-generated disinformation in the EU elections, emphasising that the main issue lies in its dissemination through online advertising. She also stressed the importance of balancing the fight against deepfakes with protecting freedom of expression. Pírková further touched upon the risks posed by the latest AI systems and potential mitigation strategies, pointing to the importance of robust enforcement to counter the profit-driven spread of AI-generated disinformation. She also highlighted a collaborative study by Access Now, AlgorithmWatch and AI Forensics, focusing on watermarking AI-generated content to ensure transparency and accountability; the results will be made available in a forthcoming paper.
Finally, Francesco Saverio Nucci, the TITAN project coordinator, explored the potential of generative AI to enhance citizens' critical thinking. He emphasised that disinformation aims to undermine critical thinking, which is essential for media literacy. Nucci proposed the use of personalised large language model (LLM) chatbot coaches that engage users in Socratic dialogue, posing simple yet insightful questions to help identify signs of conspiratorial thinking or disinformation. This approach, in which AI agents guide users to critically evaluate the information they encounter, can empower citizens to better spot and counter disinformation.
Peter Friess from the European Commission’s DG CONNECT concluded the conference by highlighting the importance of bridging knowledge and competencies among stakeholders both large and small. Friess elaborated on the challenges ahead and expressed hope for a collective statement on dealing with generative AI and disinformation, outlining the need for clear actionable steps to use AI in fighting disinformation.
The long but rewarding day ended with an informal networking session, which gave participants an opportunity to discuss insights and explore future collaborations over refreshments.
Authors: Heini Järvinen & Inès Gentil (EU DisinfoLab)
Editor: Jochen Spangenberg (DW)