For over 18 months – half of the project's runtime – the vera.ai consortium has been conducting research in the field of content analysis and disinformation detection, developing new tools and services, and disseminating the respective results. In mid-March 2024, it was time to publish a set of three deliverables that present and summarize the research and development work carried out in the respective work packages so far. Here, we provide a short overview of and links to the deliverables, all of which are publicly available.
Deliverable D3.1 gives “insights into the advancements made in multilingual and multimodal trustworthy AI methods for content analysis, enhancement, and evidence retrieval. Additionally, it presents the outcomes of extensive research efforts aimed at addressing the challenges associated with disinformation detection and content verification, providing a comprehensive report on the methods developed and experiments conducted, as well as [further] preliminary work”.
Check out the submitted deliverable.
The second deliverable, covering the work of WP4, “presents insights and methodologies focused on the detection, tracking, and impact measurement of disinformation narratives and campaigns across diverse modalities, languages, and online social media platforms.”
Deliverable D5.2 describes “[t]he user-facing tools … [that] act as bridges between the research on AI-based tools from vera.ai’s WP3 and WP4 and the final output delivered to end-users. This is realised through integrating the AI components into verification platforms. In this way, target users’ workflows can benefit from the novel AI methods for detection and analysis of misinformation content across modalities (text, image, video, and audio) as well as from the advanced techniques for detecting disinformation campaigns and narratives.”
On the deliverables subpage, we provide a list of all required submissions. Where a deliverable is publicly available, a link to the respective document is included.
Author: Anna Schild (DW)
Editor: Jochen Spangenberg (DW)