Overview of the Grand Challenge on Detecting Cheapfakes at ACM ICMR 2024
June 10–14, 2024 | Duc-Tien Dang-Nguyen, Sohail Ahmed Khan, Michael Riegler, Pål Halvorsen, Anh-Duy Tran, Minh-Son Dao, Minh-Triet Tran
The Grand Challenge on Detecting Cheapfakes at ACM ICMR 2024 addresses the growing problem of misinformation and fake news, focusing on cheapfakes: media content manipulated with simple, non-AI techniques. The challenge aims to improve the detection of out-of-context (OOC) use, where genuine images are repurposed to support false or unrelated claims.

Participants tackle two tasks on the COSMOS dataset, which contains roughly 200k images and 450k captions drawn from a variety of sources: Task 1, detecting conflicting image-caption triplets, and Task 2, detecting fake image-caption pairs. Six new methods were accepted, achieving best private test accuracies of 72.2% on Task 1 and 54.84% on Task 2; the corresponding public test accuracies were 95.6% and 93%. These methods incorporate advanced AI models such as Stable Diffusion and large language models (LLMs), reflecting the latest advances in cheapfake detection.

The challenge emphasizes both effectiveness and efficiency, evaluating models on accuracy, average precision, F1-score, number of trainable parameters, FLOPs, and model size. The COSMOS dataset is split into training, validation, and public test sets, with a held-out private test set for final submissions. The results show that most methods leverage generative models to increase data diversity and improve detection accuracy. Open challenges remain, including the need for more interpretable models and for addressing privacy concerns. The organizers plan to continue the challenge to foster further research and the development of more effective detection systems.
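The effectiveness metrics above can be illustrated with a minimal sketch. This is not the official evaluation script; the label convention (1 = OOC/fake, 0 = genuine) and the toy predictions are assumptions for illustration only.

```python
# Illustrative sketch of two of the challenge's effectiveness metrics,
# accuracy and F1-score, for a binary OOC detector. The label convention
# (1 = out-of-context, 0 = genuine) is an assumption, not the official one.

def accuracy(y_true, y_pred):
    """Fraction of image-caption samples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive (OOC) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Toy predictions for six hypothetical image-caption samples.
    y_true = [1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0]
    print(f"accuracy = {accuracy(y_true, y_pred):.3f}")  # 4/6 -> 0.667
    print(f"F1       = {f1_score(y_true, y_pred):.3f}")  # 0.667
```

The efficiency criteria (trainable parameters, FLOPs, model size) are typically reported with profiling tools rather than computed by hand, so they are omitted from this sketch.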