GOAT-Bench: Safety Insights to Large Multimodal Models through Meme-Based Social Abuse
28 Feb 2025 | Hongzhan Lin*, Ziyang Luo*, Bo Wang, Ruichao Yang, Jing Ma†
The paper introduces GOAT-Bench, a comprehensive benchmark designed to evaluate how well large multimodal models (LMMs) identify and respond to social abuse conveyed through memes. Memes, which often carry subtle and implicit meanings, have become a significant vehicle for online abuse. The benchmark consists of over 6,000 memes covering themes such as hate speech, sexism, and cyberbullying. The study evaluates 11 cutting-edge LMMs, including GPT-4V, CogVLM, and LLaVA-1.5, to assess their ability to detect and respond to these forms of abuse. The results show that current models still exhibit deficiencies in safety awareness, particularly when handling implicit abuse. The paper also explores the effectiveness of various strategies, such as Chain-of-Thought (CoT) prompting, in improving model performance. The findings highlight the need for further advances in LMMs to handle complex multimodal tasks and support safe artificial intelligence. GOAT-Bench and its resources are publicly available to support ongoing research in this field.
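To make the evaluation setup more concrete, the sketch below shows how a GOAT-Bench-style binary judgment on a meme might be elicited from a multimodal chat model, contrasting a plain instruction with a Chain-of-Thought-style instruction. This is not the paper's actual prompt or evaluation harness: the model name, prompt wording, and image URL are illustrative assumptions, and only the standard OpenAI Python client is used.

```python
# Minimal, hypothetical sketch of prompting an LMM to judge a meme,
# with and without a Chain-of-Thought-style instruction.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MEME_URL = "https://example.com/meme.png"  # placeholder image URL (assumption)

PLAIN_PROMPT = (
    "Does this meme convey hateful, sexist, or cyberbullying content? "
    "Answer Yes or No."
)

COT_PROMPT = (
    "First describe the image and its overlaid text, then reason step by step "
    "about any implicit or figurative meaning, and finally answer Yes or No: "
    "does this meme convey hateful, sexist, or cyberbullying content?"
)

def judge_meme(prompt: str, image_url: str) -> str:
    """Send one meme plus an instruction to a multimodal chat model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed stand-in for the GPT-4V-class models discussed
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("Plain answer:", judge_meme(PLAIN_PROMPT, MEME_URL))
    print("CoT answer:  ", judge_meme(COT_PROMPT, MEME_URL))
```

In a real benchmark run, the same pair of prompts would be applied to every meme in the dataset and the Yes/No answers compared against the gold labels; the CoT variant simply asks the model to surface its intermediate reasoning before committing to a verdict.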