2nd Workshop and Challenge on DeepFake Analysis and Detection

The Workshop

Machine-generated images are becoming increasingly widespread in the digital world, thanks to the spread of Deep Learning models that can generate visual data, such as Generative Adversarial Networks and Diffusion Models. While image generation tools can be employed for lawful goals (e.g., to assist content creators, generate simulated datasets, or enable multi-modal interactive applications), there is a growing concern that they might also be used for illegal and malicious purposes, such as the forgery of natural images and the generation of images in support of fake news, misogyny, or revenge porn. While the results obtained in the past few years contained artefacts that made generated images easy to recognize, today's results are far harder to distinguish from real images from a purely perceptual point of view. In this context, assessing the authenticity of images becomes a fundamental goal for security and for guaranteeing a degree of trustworthiness of AI algorithms. There is a growing need, therefore, for automated methods that can assess the authenticity of images (and, in general, multimodal content) and that can keep pace with the constant evolution of generative models, which become more realistic over time.

The second Workshop and Challenge on DeepFake Analysis and Detection (DFAD) focuses on the development of benchmarks and tools for fake data understanding and detection, with the final goals of protecting against visual disinformation and the misuse of generated images and text, and of monitoring the progress of existing and proposed detection solutions. Moreover, as the number of generative models grows, detectors of generated content should generalize to content produced by models that were unseen during the training phase. The workshop fosters the submission of works that identify novel ways of understanding and detecting fake data, especially through new machine learning approaches capable of combining syntactic and perceptual analysis.
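To make the generalization problem concrete, the sketch below fine-tunes an off-the-shelf image classifier on real images and on fakes from "seen" generators, then evaluates it on fakes from a generator family held out of training entirely. The directory layout, backbone (a pretrained ResNet-18), and hyperparameters are illustrative assumptions, not the challenge's official data or protocol.

# Minimal, illustrative sketch of a cross-generator deepfake-detection
# baseline. Paths, model choice, and hyperparameters are assumptions:
# fakes from one generator family are excluded from training so that the
# evaluation probes generalization to unseen generators.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical folders: each split contains "real/" and "fake/"
# subdirectories. The fakes under data/train come from seen generators
# (e.g., GANs); the fakes under data/test_unseen come from a family
# excluded from training (e.g., diffusion models).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
test_set = datasets.ImageFolder("data/test_unseen", transform=tfm)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# Standard ImageNet-pretrained backbone with a binary (real/fake) head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):  # short fine-tuning run, for illustration only
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Accuracy on fakes from a generator family never seen in training: the
# gap between this number and in-distribution accuracy is precisely the
# generalization problem the challenge targets.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"Accuracy on unseen-generator fakes: {correct / total:.3f}")

Baselines of this kind typically score well on fakes from generators seen in training and degrade sharply on held-out families, which is why the challenge emphasizes detectors that transfer across generators rather than memorizing the artefacts of any single one.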

The Challenge

In parallel with soliciting the submission of relevant scientific works, the Workshop hosts a competition on deepfake detection. The competition is organised with the support of the ELSA project (the European Lighthouse on Secure and Safe AI), which builds on and extends the internationally recognized ELLIS (European Laboratory for Learning and Intelligent Systems) network of excellence. The objective of the challenge is to monitor and evaluate the development of algorithms for deepfake detection in terms of efficacy and explainability.