All in all, both vera.ai teams were very successful in this complex task of credibility detection, which is highly relevant for the project. Motivated by the positive results, we are continuing with further experiments, including experimentation with state-of-the-art large language models such as ChatGPT and GPT-4.
We expect these new findings to lead to additional interesting research outcomes, as well as to the deployment of the best-performing models as part of the credibility assessment service we plan to deliver in the vera.ai project. This service will help media professionals obtain an automatic and effective credibility analysis of various online content, for example, to pre-screen whether content is not credible and should be fact-checked, or, quite the opposite, is credible and can be cited.
Authors: Branislav Pecher (KInIT), Ivan Srba (KInIT), Olesya Razuvayevskaya (USFD)
Editor: Anna Schild (DW)