This paper proposes a novel gradient strategy called Mirror Gradient (MG) to enhance the robustness of multimodal recommender systems. The authors analyze the challenges faced by these systems, such as data sparsity and cold-start issues, and identify the risks associated with multimodal information inputs, including inherent noise and information adjustment risks. These risks can significantly affect the performance of recommendation models. To address these challenges, the authors propose MG, which implicitly enhances the model's robustness during the optimization process by guiding the model towards flat local minima. Flat local minima are more robust to input distribution shifts, making them ideal for improving the stability of recommendation models.
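The summary does not spell out the update rule, so the following is only a minimal sketch of how a flat-minima-seeking gradient strategy of this kind might look: every few iterations a reversed (ascent) step is taken before the usual descent step, discouraging convergence into sharp minima. The function names, the toy loss, and the exact interleaving are illustrative assumptions, not the paper's official algorithm.

```python
def loss(w):
    # Toy quadratic loss standing in for a recommendation objective.
    return 0.5 * sum(x * x for x in w)

def grad(w):
    # Gradient of the toy loss above: d/dw (0.5 * w^2) = w.
    return list(w)

def train_with_mirror_steps(w, lr=0.1, mirror_lr=0.05, interval=3, steps=30):
    """Hypothetical sketch of a mirror-gradient-style training loop.

    Every `interval` iterations, a reversed (ascent) step is applied
    before the standard descent step. This is an assumed formulation
    for illustration only; see the paper's repository for the actual
    implementation.
    """
    for t in range(steps):
        if t % interval == 0:
            # "Mirror" step: move briefly uphill to probe sharpness.
            g = grad(w)
            w = [wi + mirror_lr * gi for wi, gi in zip(w, g)]
        # Standard gradient descent step.
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w0 = [2.0, -3.0]
w_final = train_with_mirror_steps(list(w0))
```

Because the periodic ascent step is smaller than the descent step, the loop still converges on this toy objective, while the uphill probes bias it toward flatter regions of the loss surface.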
The authors provide strong theoretical evidence and conduct extensive empirical experiments to demonstrate the effectiveness of MG across various multimodal recommendation models and benchmarks. They also show that MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models. The proposed MG is compatible with various optimizers and robust recommendation techniques, and it has been shown to improve the performance of both multimodal and non-multimodal recommendation systems.
The authors also conduct an ablation study to evaluate the impact of MG's hyperparameters. They find that MG is not overly sensitive to these choices, with 3 being a suitable value for the interval parameter. The results show that MG consistently delivers a noticeable improvement in recommendation accuracy across a range of optimizers and recommendation systems.
The authors conclude that MG is a promising new and fundamental paradigm for training multimodal recommender systems. The code for MG is available at https://github.com/Qrange-group/Mirror-Gradient.