Multi-Task Dense Prediction via Mixture of Low-Rank Experts

27 May 2024 | Yuqi Yang, Peng-Tao Jiang, Qibin Hou, Hao Zhang, Jinwei Chen, Bo Li
The paper introduces a multi-task dense prediction method called Mixture-of-Low-Rank-Experts (MLoRE). MLoRE addresses two limitations of existing multi-task learning (MTL) methods: the lack of explicit modeling of global task relationships and the high computational cost of task-specific decoders. The core idea is to add a generic convolution path to the standard mixture-of-experts (MoE) structure, so that features from all tasks share this path and parameters are shared explicitly. To keep the parameter count and computation under control, MLoRE replaces the vanilla convolutions in the expert networks with low-rank factorizations, inspired by LoRA. This reduces parameters and FLOPs while maintaining or improving accuracy. Extensive experiments on the PASCAL-Context and NYUD-v2 datasets show that MLoRE outperforms previous state-of-the-art methods across all metrics. The code is available at <https://github.com/YuqiYang213/MLoRE>.
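To make the two ingredients concrete, here is a minimal NumPy sketch of one mixture-of-low-rank-experts block for the simplest (1x1-convolution) case, where a 1x1 convolution reduces to a matrix multiply per pixel. This is an illustration of the idea under stated assumptions, not the authors' implementation: the function name `mlore_block` and all shapes are hypothetical, the paper's experts operate on convolutional feature maps, and gating in practice comes from a learned task-specific router rather than a fixed vector.

```python
import numpy as np

def mlore_block(x, generic_W, expert_As, expert_Bs, gate):
    """Sketch of an MLoRE-style block (1x1-convolution case).

    x          : (N, C_in) flattened pixel features
    generic_W  : (C_in, C_out) shared generic path (explicit parameter sharing)
    expert_As  : list of (C_in, r) low-rank down-projections
    expert_Bs  : list of (r, C_out) low-rank up-projections
    gate       : (num_experts,) task-specific gating weights
    """
    out = x @ generic_W  # generic convolution path, shared across all tasks
    for g, A, B in zip(gate, expert_As, expert_Bs):
        # Each expert is a low-rank (LoRA-style) map W_e = A @ B of rank r,
        # so it costs r*(C_in + C_out) parameters instead of C_in*C_out.
        out += g * (x @ A @ B)
    return out

# Toy usage with illustrative sizes.
rng = np.random.default_rng(0)
num_experts, rank, c_in, c_out = 4, 2, 8, 16
x = rng.standard_normal((2, c_in))
generic_W = rng.standard_normal((c_in, c_out))
expert_As = [rng.standard_normal((c_in, rank)) for _ in range(num_experts)]
expert_Bs = [rng.standard_normal((rank, c_out)) for _ in range(num_experts)]
logits = rng.standard_normal(num_experts)
gate = np.exp(logits) / np.exp(logits).sum()  # softmax over experts
out = mlore_block(x, generic_W, expert_As, expert_Bs, gate)
```

Because the generic path and every low-rank expert are linear in `x`, their weighted sum collapses into a single matrix `generic_W + sum(g * A @ B)`, which is what keeps the combined operator cheap at inference time.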