Generative AI and Democracy: Drawing Conclusions   

Generative AI technology continues to evolve at a rapid pace, with new Large Language Models (LLMs) appearing on a regular (monthly or even weekly) basis. Hyper-realistic AI-generated and manipulated media, including images, audio, and video, are becoming widely accessible through a variety of commercial and open-source tools. Off-the-shelf software such as browsers and productivity applications now integrates generative AI tools.

The second edition of ‘Meet the Future of AI’, held in Brussels on 19 June 2024, addressed 

  • the use of AI for and against disinformation, which in 2024 remains a pressing concern and tops lists of short-term AI risks, such as the World Economic Forum’s Global Risks Report,
  • its perception by media professionals and citizens,
  • the relevant regulatory landscape,
  • and the huge potential of generative AI as a basis for fighting disinformation.

The first edition of the event in 2023 (read summary) mapped the domain and identified key challenges. These meetings result from the collaboration of six European projects: TITAN, AI4Trust, AI4Media, AI4Debunk, AI-CODE and veraAI.

[Photo captions: Many discussions ahead of and during the event. Event organising team.]

Numerous developments over the last year have increased the risks of AI. 2024 is one of the most election-dense years in recent history, and the fact that election periods offer fertile ground for disinformation campaigns raises questions about the potential role of generative AI as a tool for voter manipulation. Several major elections have taken place or are taking place in 2024, including the European Parliament and US presidential elections.

It is also the year in which the Digital Services Act (DSA) became fully applicable, and the highly debated AI Act took its final shape and was approved by the EU. Several questions arise as to whether these regulations are appropriate and sufficient to mitigate the risks arising from the wide deployment of AI in an increasingly digitalised society.

Our event speakers and participants brought different perspectives and lessons learned to the table, all in an effort to make sense of this complex and fast-evolving landscape. A summary of the talks is available in this veraAI blog post.

Out of these numerous and diverse talks and discussions, some general conclusions can be drawn: 

Is AI-generated disinformation already mainstream? 

While there was a marked increase in disinformation cases involving synthetic images, deepfake videos and voice cloning compared to previous elections, the volume and impact of AI-generated disinformation in recent elections were lower than anticipated (or feared). This may be attributed to a number of factors, including the continued and effective use of non-AI disinformation tactics (e.g. ‘cheapfakes’), the swift debunking by fact-checking organizations of the cases that emerged, and the efforts of digital platforms to detect AI-generated media and prevent their spread.

How do citizens perceive generative AI and AI-generated disinformation?

Disinformation researchers and experts often assume a digital society in which generative AI technologies are well understood and ubiquitously used. In reality, recent surveys indicate that only a relatively small share of citizens is aware of and appreciates the capabilities of modern generative AI tools, and that the large majority of these people are familiar only with ChatGPT. There are also important differences across countries with respect to the perceived level of risk of AI-generated disinformation, and across sectors with respect to citizens’ trust in the positive use of generative AI technologies.

What is the current maturity of technical solutions against AI-generated disinformation? 

A wide range of technologies and tools are currently being developed to counter (generative AI) disinformation. These range from open-source tools such as the Fake News Debunker (aka the verification plugin developed by the EC co-funded projects InVID, WeVerify and veraAI) to proprietary tools and technologies from big tech companies, and include capabilities such as synthetic media detection, keyframe extraction, reverse image and video search, and content watermarking, among others. Experts using these tools report that, even though the tools are an essential line of defense against disinformation, they still face several issues, including limited reliability and difficulty in trusting their outputs (e.g. a lack of explanations). It was recognized that no single solution can act as a ‘silver bullet’ and that this is a constant battle between new generative AI risks and new defensive mechanisms.
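To make one of these capabilities concrete, below is a minimal sketch of reverse image search via perceptual hashing, a common building block for spotting recontextualised images. It assumes the open-source Pillow and ImageHash Python libraries and a local folder of reference images; it is an illustration only, not the method used by any of the tools named above.

```python
# Minimal sketch of reverse image search via perceptual hashing.
# Assumes the open-source Pillow and ImageHash libraries
# (pip install Pillow ImageHash). Illustrative only.
from pathlib import Path

import imagehash
from PIL import Image


def build_index(image_dir: str) -> dict:
    """Compute a perceptual hash for every image in a reference collection."""
    index = {}
    for path in Path(image_dir).glob("*.jpg"):
        index[imagehash.phash(Image.open(path))] = str(path)
    return index


def find_near_duplicates(query_path: str, index: dict, max_distance: int = 8) -> list:
    """Return reference images whose hash is within a Hamming-distance threshold.

    A 'new' viral image that closely matches an old reference image is a
    strong signal that it has been recycled or recontextualised.
    """
    query_hash = imagehash.phash(Image.open(query_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return [path for h, path in index.items() if (query_hash - h) <= max_distance]
```

In practice, verification tools combine such low-level signals with metadata analysis and human review rather than relying on any single check.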

Is there sufficient regulation in place to address the challenge? 

European regulation is advancing quickly and strives to keep pace with the rapid developments in the field. Provisions in the DSA and the AI Act appear to be well designed and could become a valuable tool for authorities to address several of the risks arising from generative AI technologies. However, applying these regulations in national contexts and enforcing them appears daunting and requires comprehensive resources, processes and tools to succeed. Therefore, instead of new regulation, the focus should now be placed on assessing the existing regulation and ensuring its successful implementation. 

Can generative AI be leveraged in other creative ways to counter disinformation? 

Beyond using AI for disinformation detection, which is recognized as an essential need, there are opportunities to leverage AI and generative AI in other creative ways against disinformation. These could include, for instance, automating data extraction and analysis pipelines to support auditing and transparency reporting (see the sketch below), building new tools that authorities need to monitor platforms’ compliance with new regulation, building new support tools for media professionals, and exploring the use of AI to stimulate citizens’ critical thinking and awareness.
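As a rough illustration of the first idea, the sketch below fetches a machine-readable transparency report and aggregates moderation actions by stated reason. The URL and JSON fields are hypothetical placeholders invented for this example; real platforms publish transparency data in their own formats.

```python
# Minimal sketch of an automated transparency-reporting pipeline.
# The endpoint URL and JSON fields below are hypothetical placeholders;
# real platforms publish transparency data in their own formats.
import json
from collections import Counter
from urllib.request import urlopen

REPORT_URL = "https://example.org/transparency/2024.json"  # hypothetical endpoint


def summarise_moderation_actions(url: str = REPORT_URL) -> Counter:
    """Aggregate moderation actions by stated reason (e.g. 'synthetic_media')."""
    with urlopen(url) as response:
        records = json.load(response)  # assumed: a list of action records
    return Counter(record.get("reason", "unknown") for record in records)


if __name__ == "__main__":
    for reason, count in summarise_moderation_actions().most_common():
        print(f"{reason}: {count}")
```

Even a simple aggregation like this, run periodically, could feed the kind of auditing and compliance-monitoring tools mentioned above.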

Much research has been done in the field of countering disinformation, especially on how generative AI poses new and constantly evolving challenges. Tackling these challenges requires a collaborative approach involving a variety of sectors and stakeholders, from regulation to tech to civil society. The European projects that have joined forces in the ‘Meet the Future of AI’ context are ready to do their part in tackling these challenges. We invite others to join us in defending democracies and the values of free and pluralistic societies.

 

Author: Symeon Papadopoulos (CERTH-ITI)

Editor: Anna Schild (DW)
 

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.