In addition to the above, the contributors also provided input on important points of the consultation, mostly concerning the relationship with private stakeholders.
- Wide Access to Data for Research on AI Disinformation: A significant concern raised is the restrictive data access policies of VLOPs and VLOSEs, which currently hamper efforts by researchers and verification professionals to study and counter AI-generated disinformation. The contributors call for improved data access to enable independent research into the effectiveness of platform measures against disinformation, including AI-generated content.
- Recommendations on Election-Specific Risk Mitigation Measures: The contributors agree with the recommended best practices but suggest more precise language to avoid conflicts with freedom of expression. They consider excessive delegation of content regulation to private companies inadvisable and emphasize the complexity of disinformation analysis. Furthermore, they advocate for greater data availability surrounding political advertising, improved real-time API functionality, and the inclusion of a broader range of actors beyond official political actors in risk assessments.
- Factors for Detecting Systemic Risks and Additional Mitigation Measures: The response highlights the cross-platform nature of disinformation, especially around elections. It recommends additional measures for cross-platform action and data sharing, suggests political ads be fact-checked before publication, and calls for careful auditing of the independence of subcontractors tasked with content moderation. The importance of providing users with contextual information and transparently addressing moderation biases is also stressed.
- Effectiveness of Risk Mitigation Measures: The contributors suggest comparing engagement metrics across political posts and ads to measure effectiveness. They also emphasize the need for independent verification by researchers, enabled by detailed data provision from VLOPs and VLOSEs.
- Post-Election Analysis and Transparency: The responses stress the importance of transparency and data availability, especially regarding political ad spending and the creation of comprehensive datasets of moderated disinformation content. The vera.ai contributors to the consultation advocate for comprehensive election-related datasets to enable research into the effectiveness, coverage, and fairness of moderation measures by all VLOPs and VLOSEs.
Overall, the contributors commend the guidelines' scope and balance but advise seeking enforceable obligations rather than “best effort” recommendations. They furthermore call for more user involvement in the processes outlined by the guidelines, including suggested mechanisms for improving rapid response to election-related incidents on VLOPs and VLOSEs.
NB: The evidence and argumentation behind the above recommendations are provided in the detailed response to the consultation, which is included in full below.
Authors: Alexandre Alaphilippe & Joe McNamee (EU DisinfoLab) with contributions from individuals of selected vera.ai consortium partners
Editors: Jochen Spangenberg (DW) & Symeon Papadopoulos (ITI-CERTH) & Kalina Bontcheva (USFD)