We see a few challenges and trends in the realm of disinformation analysis and AI-supported verification tools and services. One challenge is creating tools that not only make a difference in the work routines of journalists but also support evidence-based work, giving the public a clear understanding of how verification was conducted. Another challenge is avoiding the creation of additional algorithmic black boxes to tackle disinformation, as there are already concerns about the lack of transparency on existing platforms. To address these challenges, we are committed to working collaboratively with the vera.ai team to develop tools that prioritise transparency, accuracy, and the needs of journalists and newsrooms.
My vision for vera.ai in 2025 is that the platform will have successfully developed and implemented innovative tools that significantly ease the work of newsrooms and journalists. I hope that the platform's success will be demonstrated by its widespread adoption beyond the consortium, similar to past successful projects like InVID/WeVerify. Through our ongoing involvement in vera.ai, I am confident that we can contribute to the development of these tools and ultimately help combat the spread of disinformation.
Author / contributor: Alexandre Alaphilippe (EU DisinfoLab)
Editor: Anna Schild (DW)