5 Jul 2024 | Dingkang Yang, Mingcheng Li, Dongling Xiao, Yang Liu, Kun Yang, Zhaoyu Chen, Yuzheng Wang, Peng Zhai, Ke Li, Lihua Zhang
The paper "Towards Multimodal Sentiment Analysis Debiasing via Bias Purification" addresses the issue of dataset biases in Multimodal Sentiment Analysis (MSA) tasks, particularly multimodal utterance-level label bias and word-level context bias. These biases can lead to models making inaccurate predictions by relying on statistical shortcuts and spurious correlations. To mitigate these issues, the authors propose a Multimodal Counterfactual Inference Sentiment (MCIS) framework, which is based on causality rather than conventional likelihood. The MCIS framework first formulates a causal graph to identify harmful biases from already-trained models. During inference, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases. By comparing factual and counterfactual outcomes, MCIS can make unbiased decisions from biased observations. Extensive experiments on standard MSA benchmarks demonstrate the effectiveness of the proposed framework, showing significant improvements over existing models. The main contributions of the paper include identifying and disentangling label and context biases in MSA from a causal inference perspective, proposing a parameter-free and training-free MCIS framework, and providing comprehensive experimental results to validate the framework's effectiveness.The paper "Towards Multimodal Sentiment Analysis Debiasing via Bias Purification" addresses the issue of dataset biases in Multimodal Sentiment Analysis (MSA) tasks, particularly multimodal utterance-level label bias and word-level context bias. These biases can lead to models making inaccurate predictions by relying on statistical shortcuts and spurious correlations. To mitigate these issues, the authors propose a Multimodal Counterfactual Inference Sentiment (MCIS) framework, which is based on causality rather than conventional likelihood. The MCIS framework first formulates a causal graph to identify harmful biases from already-trained models. During inference, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases. By comparing factual and counterfactual outcomes, MCIS can make unbiased decisions from biased observations. Extensive experiments on standard MSA benchmarks demonstrate the effectiveness of the proposed framework, showing significant improvements over existing models. The main contributions of the paper include identifying and disentangling label and context biases in MSA from a causal inference perspective, proposing a parameter-free and training-free MCIS framework, and providing comprehensive experimental results to validate the framework's effectiveness.