The main challenge I see is developing tools for disinformation analysis that can work at scale across the web. This task is further complicated by the abundance of synthetic content, which is not necessarily linked to the dissemination of false information. Therefore, I think it is more practical to apply the algorithms developed during the project within specific scenarios of interest.
Currently, a major trend is to develop AI verification tools that rely on multiple modalities, for example, exploiting an image together with the text describing it, or a video together with its audio. This can help not only to increase the efficacy of the techniques but also to create more explainable tools. For example, by looking for possible inconsistencies between a video and its generated audio, the output becomes more interpretable and therefore more useful to journalists and fact-checkers.
I hope the tools we develop for the vera.ai project will be as general as possible. My wish is that they can also detect synthetic content generated by future models. To this end, we are designing methods that are trained only on real data and look for anomalies with respect to an intrinsic model of pristine data. Such methods have the desirable property of detecting any type of manipulation, including new ones that may emerge in the future.
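To make the idea concrete, here is a minimal sketch of one-class anomaly detection of this kind: a model of "pristine" data is fitted using only real samples, and anything that deviates from it is flagged, with no synthetic examples needed at training time. The feature vectors, distributions, and threshold are purely illustrative assumptions, not the actual vera.ai method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features for pristine media (in practice these would be
# descriptors extracted from real images or videos; values here are
# hypothetical and chosen only for illustration).
real_features = rng.normal(0.0, 1.0, size=(2000, 8))

# "Intrinsic model" of real data: a mean vector plus a distance
# threshold, both learned ONLY from pristine samples.
mu = real_features.mean(axis=0)
train_dist = np.linalg.norm(real_features - mu, axis=1)
threshold = np.quantile(train_dist, 0.99)  # 99th percentile of real distances

def is_anomalous(x):
    """Flag samples that deviate from the learned model of pristine data."""
    return np.linalg.norm(x - mu, axis=1) > threshold

new_real = rng.normal(0.0, 1.0, size=(100, 8))   # unseen real data
synthetic = rng.normal(3.0, 1.0, size=(100, 8))  # out-of-distribution data

print(is_anomalous(new_real).mean())   # low false-alarm rate on real data
print(is_anomalous(synthetic).mean())  # most synthetic samples flagged
```

Because the detector only models what real data looks like, it does not depend on artifacts of any specific generator, which is what gives this family of methods a chance of generalizing to future manipulation techniques.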
Work keeps me awake at night. I am addicted to reading and learning about new approaches and new technologies, and I feel like I never have enough time.
Author / contributor: Luisa Verdoliva (University Federico II of Naples)
Editor: Anna Schild (DW)