Mitigation of systemic risks in the disinformation space - a standpoint from vera.ai members

An important part of the EU’s Digital Services Act (DSA) is the mitigation of risks (so-called “systemic risks”) that are inherent in the provision of services by certain ‘very large online platforms’ (VLOPs). VLOPs are required to take reasonable, proportionate and effective measures to mitigate systemic risks.

In order to help VLOPs understand what is expected of them, the European Commission issues guidelines on appropriate measures and best practices. When preparing such guidelines, the Commission must organize consultations.

One risk recognized by the DSA is the risk to electoral processes.

What this means in the context of the upcoming elections

Due to, inter alia, the upcoming European elections, the Commission felt that guidelines on the systemic risks to electoral processes were needed. In late March 2024 it therefore produced draft guidelines, together with a set of questions for consultation.

Following this call for feedback, individuals from a number of vera.ai consortium partners provided their views on the matter to the EC (the full response can be found at the end of this contribution).

This article recaps and shares some of the contributions made.

Image: EC Guidelines (credit: European Commission)

Recommendations


Firstly, the vera.ai contributors focused on the role of AI in relation to systemic risks, concentrating on three points addressed in more detail below:

  1. Generative AI and Disinformation: The contributors point out the rationale for considering AI-generated disinformation a systemic risk, and not only around elections. They recommend that the guidelines place more emphasis on developing and deploying detection methods for AI-generated content, acknowledging that AI-generated textual narratives are challenging to detect and may be used to spread disinformation at scale. This recommendation aligns with the challenges outlined in the White Paper "Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities", written by individuals involved in the vera.ai project as well as the AI4Media, TITAN, and AI4Trust projects.
  2. Watermarking is not a bulletproof solution: While watermarking is mentioned as a useful tool for identifying AI-generated content, the contributors underscore its technical limitations and the difficulty of applying it to AI-generated textual disinformation. They highlight the need for more effective detection mechanisms that can protect users and society from AI-generated disinformation campaigns.
  3. Permanent risk assessments: The very rapid development of the AI ecosystem will probably lead to a myriad of new systems, tools and methodologies being used by an increasing number of stakeholders. The contributors therefore recommend continuous assessment and benchmarking of detection methods for content generated with different AI models. They advocate the creation of "organic" datasets of AI-generated media and partnerships between VLOPs / VLOSEs (Very Large Online Search Engines) and the academic sector to identify effective detection methods. The goal is to ensure that risk mitigation measures keep pace with technological developments in AI.

In addition to the above, the contributors provided input on important points of the consultation, mostly related to the relationship with private stakeholders.

  1. Wide Access to Data for Research on AI Disinformation: Significant concerns are raised about the restrictive data access policies of VLOPs and VLOSEs, which currently hamper efforts by researchers and verification professionals to study and counter AI-generated disinformation. The contributors call for improved data access to enable independent research into the effectiveness of platform measures against disinformation, including AI-generated content.
  2. Recommendations on Election-Specific Risk Mitigation Measures: The contributors agree with the recommended best practices but suggest more precise language to avoid conflicts with freedom of expression. Excessive delegation of content regulation to private companies is not considered advisable; at the same time, the complexity of disinformation analysis is emphasized. Furthermore, the contributors advocate greater data availability surrounding political advertising, improved real-time functionality of APIs, and the inclusion of a broader range of actors beyond official political actors in risk assessments.
  3. Factors for Detecting Systemic Risks and Additional Mitigation Measures: The response highlights the cross-platform nature of disinformation, especially around elections. It recommends additional measures for cross-platform action and data sharing, suggests that political ads be fact-checked before publication, and calls for careful auditing of the independence of subcontractors tasked with content moderation. The importance of providing users with contextual information and of transparently addressing moderation biases is also stressed.
  4. Effectiveness of Risk Mitigation Measures: The contributors suggest comparing engagement metrics across political posts and ads to measure effectiveness. The need for independent verification by researchers and detailed data provision by VLOPs and VLOSEs to enable such verification is emphasized.
  5. Post-Election Analysis and Transparency: The responses stress the importance of transparency and data availability, especially regarding political ad spending and the creation of comprehensive datasets of moderated disinformation content. The vera.ai contributors to the consultation advocate for comprehensive election-related datasets to enable research into the effectiveness, coverage, and fairness of moderation measures by all VLOPs / VLOSEs.

Overall, the contributors commend the guidelines' scope and balance but advise seeking enforceable obligations rather than “best effort” recommendations. They furthermore call for more user involvement in the processes outlined by the guidelines, including mechanisms for improving rapid response to election-related incidents on VLOPs and VLOSEs.

NB: The evidence and argumentation behind the above recommendations are provided in the detailed response to the consultation, which has been added in full below.

Authors: Alexandre Alaphilippe & Joe McNamee (EU DisinfoLab) with contributions from individuals of selected vera.ai consortium partners

Editors: Jochen Spangenberg (DW) & Symeon Papadopoulos (ITI-CERTH) & Kalina Bontcheva (USFD)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.