Assisting automated fact-checking via external evidence

The information age, especially with the rise of online platforms and social media, has led to new forms of misinformation and disinformation, such as DeepFakes and multimodal misinformation. This makes it hard for people to trust what they read online. 

Fact-checking organizations and dedicated outlets like Snopes, PolitiFact, Reuters, Deutsche Welle and AFP fact-checks have emerged to verify claims in news articles and social media posts. However, manual fact-checking is time-consuming and usually cannot keep up with the rapid spread of false information. Automated fact-checking (AFC) is being developed, partly in response to these limitations, to help with this.

Researchers are using technologies such as deep learning and computer vision to create tools that automate parts of the fact-checking process. These tools can detect important claims and find evidence on the web to support or refute them. An example of a claim and the corresponding evidence collection and retrieval is shown in the figure below.

An example of a claim, evidence collection and retrieval. Claim and verdict source: Snopes.

Developing and training effective AFC systems

Within the context of vera.ai, we first identified key challenges in training AFC systems.

To develop and train effective AFC systems, collected evidence must meet specific criteria and, in particular, must not be leaked from existing fact-checking articles. The problem of leaked evidence arises when information from previously fact-checked articles is used during training, which makes the model less effective at handling new, not-yet-fact-checked misinformation. Additionally, the external information retrieved by the model must be credible, so that it is not fed unreliable and false data. Addressing these issues is crucial for achieving realistic and practical fact-checking results.

To address these challenges, we developed the “CREDible, Unreliable, or LEaked” (CREDULE) dataset by modifying, merging, and extending previous datasets such as MultiFC, PolitiFact, PUBHEALTH, NELA-GT, Fake News Corpus, and Getting Real About Fake News. It contains 91,632 samples from 2016 to 2022, equally distributed across three classes: 

  • “Credible,” 
  • “Unreliable,” and 
  • “Fact-checked” (Leaked). 

The dataset includes short texts (titles) and long texts (full articles), along with metadata such as date, domain, URL, topic, and credibility scores.
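To make this structure more concrete, the snippet below sketches how one might load and inspect such a dataset with pandas. The file name and column names (title, text, label, date, domain, url, topic, credibility_score) are illustrative assumptions, not the official CREDULE schema.

    # Illustrative sketch only: file and column names are assumptions,
    # not the official CREDULE schema.
    import pandas as pd

    df = pd.read_csv("credule.csv")  # hypothetical file name

    # Three equally distributed classes: Credible, Unreliable, Fact-checked (Leaked)
    print(df["label"].value_counts())

    # Short text (title), long text (full article), and metadata fields
    sample = df.iloc[0]
    print(sample["title"])                     # short text
    print(sample["text"][:300])                # start of the full article
    print(sample["date"], sample["domain"], sample["url"], sample["topic"])
    print(sample["credibility_score"])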

Promising performance

Having created the CREDULE dataset, we then developed the EVidence VERification Network (EVVER-Net), a neural network designed to detect leaked and unreliable evidence during the evidence retrieval process. EVVER-Net can be integrated into an AFC pipeline to ensure that only credible information is used during training. The model leverages large pre-trained transformer-based text encoders and integrates information from credibility and bias scores.
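As a rough illustration of this kind of architecture (a sketch under assumptions, not the actual EVVER-Net implementation), the snippet below encodes an evidence text with a pre-trained transformer and concatenates a scalar domain credibility score before a three-way classification head. The encoder choice, pooling strategy and class order are assumptions.

    # Illustrative sketch of a transformer text encoder combined with a credibility score.
    # NOT the actual EVVER-Net implementation; encoder choice and sizes are assumptions.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class EvidenceClassifier(nn.Module):
        def __init__(self, encoder_name="distilbert-base-uncased", num_classes=3):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # +1 for the scalar credibility score appended to the text embedding
            self.head = nn.Linear(hidden + 1, num_classes)

        def forward(self, input_ids, attention_mask, credibility_score):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            pooled = out.last_hidden_state[:, 0]  # first-token ("[CLS]"-style) embedding
            feats = torch.cat([pooled, credibility_score.unsqueeze(1)], dim=1)
            return self.head(feats)  # logits over Credible / Unreliable / Leaked

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = EvidenceClassifier()
    batch = tokenizer(["Example evidence snippet ..."], return_tensors="pt",
                      truncation=True, padding=True)
    logits = model(batch["input_ids"], batch["attention_mask"],
                   credibility_score=torch.tensor([0.8]))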

EVVER-Net demonstrated impressive performance, achieving up to 89.0% accuracy without credibility scores and 94.4% with them.

To examine the evidence contained in widely used AFC datasets, we then applied the EVVER-Net model to LIAR-Plus, MOCHEG, FACTIFY, NewsCLIPpings, and VERITE. EVVER-Net identified up to 98.5% of items in LIAR-Plus, 95.0% in the "Refute" class of FACTIFY, and 83.6% in MOCHEG as leaked evidence. For instance, in the FACTIFY dataset, the claim "Microsoft bought Sony for $121 billion" was accompanied by evidence sourced from a previously fact-checked article, which limits a model's ability to learn to detect and verify new claims that have not yet been fact-checked.
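Plugged into an evidence-retrieval step, such a classifier could be used to discard leaked (or unreliable) evidence before it reaches the verification model. The sketch below reuses the model and tokenizer from the previous snippet and assumes a particular class order; it is illustrative rather than the actual vera.ai pipeline.

    # Illustrative filtering step: keep only evidence predicted as credible.
    # Assumes `model` and `tokenizer` from the previous sketch; class order is an assumption.
    import torch

    LABELS = ["credible", "unreliable", "leaked"]  # assumed label order

    @torch.no_grad()
    def filter_evidence(evidence_texts, credibility_scores):
        batch = tokenizer(evidence_texts, return_tensors="pt",
                          truncation=True, padding=True)
        logits = model(batch["input_ids"], batch["attention_mask"],
                       torch.tensor(credibility_scores, dtype=torch.float))
        preds = logits.argmax(dim=1).tolist()
        # Discard evidence flagged as leaked or unreliable
        return [text for text, p in zip(evidence_texts, preds) if LABELS[p] == "credible"]

    kept = filter_evidence(["Evidence snippet from some web page ..."], [0.6])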


NB 1: More (technical) details about what has been briefly illustrated above can be found in the paper “Credible, Unreliable or Leaked?: Evidence verification for enhanced automated fact-checking” by Zacharias Chrysidis, Stefanos-Iordanis Papadopoulos, Symeon Papadopoulos and Panagiotis C. Petrantonakis, which was presented at ICMR’s 3rd ACM International Workshop on Multimedia AI against Disinformation.

NB 2: This is a condensed and further edited version of an article that first appeared on the MeVer team's website. You may also want to refer to the original article for more details.

Authors: Stefanos-Iordanis Papadopoulos, Olga Papadopoulou, Symeon Papadopoulos (all MeVer team at CERTH-ITI) 

Editor: Jochen Spangenberg (DW)


vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.