M³oE: Multi-Domain Multi-Task Mixture-of-Experts Recommendation Framework

July 14-18, 2024 | Zijian Zhang, Shuchang Liu, Jiaao Yu, Qingpeng Cai, Xiangyu Zhao, Chunxu Zhang, Ziru Liu, Qidong Liu, Hongwei Zhao, Lantao Hu, Peng Jiang, Kun Gai
This paper proposes M³oE, a framework for multi-domain multi-task (MDMT) recommendation. M³oE integrates multi-domain information, transfers knowledge across domains and tasks, and optimizes multiple objectives jointly. It leverages three mixture-of-experts modules to learn common, domain-aspect, and task-aspect user preferences, enabling disentangled modeling of the complex dependencies among domains and tasks. A two-level fusion mechanism gives fine-grained control over feature extraction and fusion across diverse domains and tasks, and AutoML is applied so the model structure can be optimized dynamically. The authors describe M³oE as the first framework to solve multi-domain multi-task recommendation self-adaptively, and the implementation code is released for reproducibility.

The framework targets the MDMT seesaw problem: when multiple domains and tasks are modeled simultaneously, their interactions make it difficult to transfer information and balance objectives, so gains on one domain-task pair often come at the cost of another. By first disentangling and then integrating multi-domain and multi-task knowledge, M³oE achieves consistent improvements over existing methods.

The architecture consists of three parts. A domain representation extraction layer integrates domain-specific and domain-common information. A multi-view expert learning layer extracts and fuses information for specific domains and tasks through the common, domain-aspect, and task-aspect expert modules. An MDMT objective prediction layer generates a separate output for each domain-task pair. The two-level fusion mechanism controls how information is aggregated for each domain and task, and AutoML optimizes this structure dynamically. A minimal sketch of the expert modules and the fusion step is given below.

Extensive experiments on two benchmark datasets demonstrate that M³oE outperforms state-of-the-art methods and effectively disentangles and integrates cross-domain and cross-task knowledge, alleviating the MDMT seesaw problem.
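The sketch below illustrates the idea of three expert groups (common, per-domain, per-task) combined by a per-(domain, task) fusion weight and a per-pair prediction tower. It is not the authors' implementation: the module names, dimensions, gating design, and single-level softmax fusion shown here are assumptions made only to clarify the described architecture.

```python
# Illustrative sketch of M³oE-style multi-view experts with a learnable fusion.
# NOT the paper's code: all hyperparameters and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertGroup(nn.Module):
    """A mixture-of-experts block: several small MLP experts plus a softmax gate."""

    def __init__(self, in_dim: int, hidden_dim: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, H)
        weights = F.softmax(self.gate(x), dim=-1).unsqueeze(-1)        # (B, E, 1)
        return (weights * expert_out).sum(dim=1)                       # (B, H)


class M3oESketch(nn.Module):
    """Common, per-domain, and per-task expert groups with a two-level fusion:
    learnable weights mix the three views per (domain, task) pair (level 1),
    then a per-pair tower maps the fused feature to a prediction (level 2)."""

    def __init__(self, in_dim: int, hidden_dim: int,
                 num_domains: int, num_tasks: int, num_experts: int = 4):
        super().__init__()
        self.common = ExpertGroup(in_dim, hidden_dim, num_experts)
        self.domain_experts = nn.ModuleList(
            ExpertGroup(in_dim, hidden_dim, num_experts) for _ in range(num_domains))
        self.task_experts = nn.ModuleList(
            ExpertGroup(in_dim, hidden_dim, num_experts) for _ in range(num_tasks))
        # Learnable fusion logits over the three views, one set per (domain, task).
        self.fusion_logits = nn.Parameter(torch.zeros(num_domains, num_tasks, 3))
        self.towers = nn.ModuleList(
            nn.ModuleList(nn.Linear(hidden_dim, 1) for _ in range(num_tasks))
            for _ in range(num_domains))

    def forward(self, x: torch.Tensor, domain: int, task: int) -> torch.Tensor:
        views = torch.stack([
            self.common(x),
            self.domain_experts[domain](x),
            self.task_experts[task](x),
        ], dim=1)                                         # (B, 3, H)
        alpha = F.softmax(self.fusion_logits[domain, task], dim=-1)
        fused = (alpha.view(1, 3, 1) * views).sum(dim=1)  # (B, H)
        return torch.sigmoid(self.towers[domain][task](fused)).squeeze(-1)
```

Each domain-task pair thus gets its own prediction head while sharing the common experts, which is how the description above separates shared from domain- and task-specific knowledge.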
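The summary only states that AutoML enables dynamic structure optimization without naming a technique. One common way to realize this, assumed here purely for illustration, is a DARTS-style bi-level loop that updates the fusion logits on validation batches and the remaining parameters on training batches. The sketch reuses the hypothetical M3oESketch class from the block above.

```python
# Hedged sketch of one possible "dynamic structure optimization via AutoML":
# a DARTS-style bi-level scheme. The optimizer split and loss are assumptions,
# not the paper's recipe.
import torch
import torch.nn.functional as F

model = M3oESketch(in_dim=32, hidden_dim=64, num_domains=2, num_tasks=2)

arch_params = [model.fusion_logits]
weight_params = [p for n, p in model.named_parameters() if n != "fusion_logits"]

weight_opt = torch.optim.Adam(weight_params, lr=1e-3)
arch_opt = torch.optim.Adam(arch_params, lr=1e-4)


def step(train_batch, val_batch):
    (x_tr, y_tr, d, t), (x_va, y_va, dv, tv) = train_batch, val_batch

    # Level 1: update the fusion (structure) weights on a validation batch.
    arch_opt.zero_grad()
    F.binary_cross_entropy(model(x_va, dv, tv), y_va).backward()
    arch_opt.step()

    # Level 2: update the network weights on a training batch.
    weight_opt.zero_grad()
    F.binary_cross_entropy(model(x_tr, d, t), y_tr).backward()
    weight_opt.step()
```

Splitting the parameters this way lets the fusion structure adapt to held-out data while the experts and towers fit the training objectives, which matches the adaptability claim in the summary at a conceptual level.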