MAD 2024 Workshop

From 10-13 June 2024, the ACM International Conference on Multimedia Retrieval (ICMR'24) takes place in Phuket, Thailand. In the context of this prestigious event, vera.ai project partners are co-organising the 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD'24). The workshop runs on 10 June, day 1 of ICMR'24.

Context

Disinformation often spreads easily in social networks and is frequently propagated by social media actors and network communities to achieve specific (mostly malevolent) objectives. This can have negative effects on users' lives as well as on societies as a whole, for example by distorting points of view on topics such as politics, health or religion. Ultimately, it can damage the very fabric of democratic societies. That is why it can (and should) be countered with an effective combination of human and technical means.

Disinformation campaigns are increasingly powered by advanced AI techniques, and considerable effort has been put into the detection of fake content. While important, this is only one piece of the puzzle if one wants to address the phenomenon comprehensively.

Whether a piece of information is considered fake or true often depends on the temporal and cultural contexts in which it is interpreted. This is, for instance, the case with scientific knowledge, which evolves at a fast pace, and whose usage in mainstream content should be updated accordingly.

Multimedia content is often at the core of AI-assisted disinformation campaigns, whose impact is often directly related to the perceived credibility of that content.

Significant advances in the automatic generation and manipulation of each modality have been achieved with the introduction of dedicated deep learning techniques. Visual content can be tampered with to produce manipulated yet realistic versions of it. Synthesised speech has reached a level of quality that makes it increasingly difficult to distinguish from a real voice. Deep language models, trained on huge corpora, can generate text that resembles human writing. Combining these advances has the potential to boost the effectiveness of disinformation campaigns.

Countering this combination of techniques is an open research topic that needs to be addressed in order to reduce the effects of disinformation campaigns. That is why the MAD series of workshops exists.

Call for Papers

The MAD'24 workshop welcomes contributions related to different aspects of AI-powered disinformation.

Topics of interest include, but are not limited to:

  • Disinformation detection in multimedia content (e.g., video, audio, texts, images);
  • Multimodal verification methods;
  • Synthetic and manipulated media detection;
  • Multimedia forensics;
  • Disinformation spread and effects in social media;
  • Analysis of disinformation campaigns in societally-sensitive domains;
  • Robustness of media verification against adversarial attacks and real-world complexities;
  • Fairness and non-discrimination of disinformation detection in multimedia content;
  • Explaining disinformation and disinformation detection technologies to non-expert users;
  • Temporal and cultural aspects of disinformation;
  • Dataset sharing and governance in AI for disinformation;
  • Datasets for disinformation detection and multimedia verification;
  • Open resources, e.g., datasets, software tools;
  • Multimedia verification systems and applications;
  • System fusion, ensembling and late fusion techniques;
  • Benchmarking and evaluation frameworks.

Workshop enablers and support

The workshop is supported under H2020 projects AI4Media (A European Excellence Centre for Media, Society and Democracy), vera.ai (VERification Assisted by Artificial Intelligence), and the Horizon Europe project AI4Debunk (Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation).

Authors: Luca Cuccovillo & Stefanie Theiss (Fraunhofer IDMT)

Editor: Jochen Spangenberg (DW)


Note: this article first appeared on the MAD 2024 workshop website. It was slightly adapted and edited for publication here. See also the ICMR MAD'24 workshop website for further details.


vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.