Three projects joining forces to bring you MAD'24

On 10 June 2024, the EU projects AI4Media, veraAI and AI4Debunk co-organized a joint workshop on Multimedia AI against Disinformation (MAD’24). The event, now in its 3rd edition, was held in Phuket, Thailand. Here's a summary of what you missed if you didn't manage to get there in person.

Welcome and starting off

After a warm welcome by the workshop organizers (among them the author of this text, Luca Cuccovillo, who is part of both the AI4Media and the veraAI teams), Prof. Duc-Tien Dang-Nguyen opened the workshop with his keynote talk on “Multimedia AI vs Information Disorder: A Journey of Discovery”. In his talk, he discussed several case studies conducted in collaboration with fact-checkers and journalists in the Nordic countries. He focused on the user needs of fact-checkers, their workflows, and the tools they use to verify visual content in news, with the goal of providing insights for future developments in multimedia AI.

Session reports

The first session on “Synthetic Audio Generation and Detection” began with an introduction to audio deepfake generation for non-experts by Fraunhofer SIT and ATHENE, followed by a guide to audio deepfake detection techniques for non-experts, also authored by Fraunhofer SIT and ATHENE, and closed with a novel audio transformer by Fraunhofer IDMT aimed at detecting synthetic speech recordings based on Benford’s law.

The second session on “Evaluation of AI Models” continued the workshop with the presentation of a novel framework for benchmarking synthetic image detection methods by CERTH-ITI, followed by an in-depth statistical analysis and comparison of feature descriptors suitable for training synthetic image detectors by TU-Berlin. Together, the two presentations provided a comprehensive overview of the current landscape of synthetic image detection, complementing the initial discussion on synthetic speech detection with valuable insights from the visual domain.

The lunch break was followed by an exciting invited talk by Matyas Bohacek, who presented an extensive report of his endeavor to create a photorealistic deepfake of a real news anchor using only open-source tools and models, limited data from the internet, and a consumer laptop. Having such a young scholar share his thrill and excitement about creating a photorealistic deepfake, and about the reaction this example generated among the news anchor’s colleagues who collaborated in the experiment, was extremely stimulating and warmly received by the audience.

The third session on “AI for Video and Image Analysis” began with a presentation by CERTH-ITI on the detection of discrepancies between the acoustic and visual scenes of a manipulated video file. CERTH-ITI also presented the second work of the session, addressing methods for explainable AI applied to synthetic image detection and their ability to discern which characteristics of the input image led to the detection of synthesis traces. Finally, UPB presented their work on improving the generalization of synthetic image detection models by continuously using adversarial attacks, starting only from real samples, to generate new deepfakes that a detector might not be trained to recognize.

The workshop ended with a fourth and final session on “AI for Automated Fact-Checking”, which included the presentation of the CREDULE dataset and the EVVER-Net architecture for early misinformation detection by CERTH-ITI, followed by a description of NewsPolyML, a novel multilingual dataset for fake news assessment curated by TU-Berlin. Finally, the workshop closed with an in-depth evaluation of explainable AI features tailored for claim detection, conducted by DFKI through an extensive crowdsourcing experiment.

Happy and content organizers

We - the organizers - were very pleased with the turnout, despite the (very tempting 😉) sandy beach of the Phuket lagoon. Everyone had the chance to make new friends, and the program included diverse and exciting contributions, covering disinformation detection from multiple angles. As in the previous editions, it was intense (very much so!) but 100 per cent worth the time and effort.

Where next (for MAD'25)?

Are you as curious as the participants about where the next MAD Workshop (in other words: MAD'25) might take place? Below we have generated a teaser picture of the potential next conference location for you 😉.

[Image] MAD'25 location, generated using the prompt “expressive oil painting of the most renown landmark in (xxx)”. Credit: Luca Cuccovillo and the respective image generator.

We look forward to welcoming as many of you as possible at MAD'25. 

Conference material available

If you want to check out the presentations and contributions from MAD'24, look no further than the ACM website, where everything is available online.


Author: Luca Cuccovillo (Fraunhofer IDMT)

Editors: Jochen Spangenberg (DW), Stefanie Theiß (Fraunhofer IDMT)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.