Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

24 Apr 2018 | Alex Kendall, Yarin Gal, Roberto Cipolla
This paper addresses the challenge of multi-task learning in computer vision, particularly in scene understanding tasks such as geometry and semantics. The authors propose a principled approach to combining multiple loss functions by weighting each task according to its homoscedastic uncertainty. This allows the model to learn quantities with different units or scales in both classification and regression settings.

The key contributions include:

1. **Novel multi-task loss function**: a principled way to combine multiple regression and classification loss functions using homoscedastic task uncertainty.
2. **Unified architecture**: a single architecture for semantic segmentation, instance segmentation, and depth regression.
3. **Performance improvement**: a demonstration that the proposed method can learn optimal task weightings and outperform separate models trained individually on each task.

The authors show that the performance of multi-task systems depends strongly on the relative weighting between each task's loss, which is difficult to tune manually. By modelling homoscedastic uncertainty, the model learns the optimal weights automatically, improving overall performance. The method is evaluated on the CityScapes dataset, where it outperforms single-task models and other multi-task approaches. The paper also shows that the proposed loss function is robust to initialization and provides qualitative examples of the model's performance and failure modes.
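The uncertainty-based weighting described above can be sketched in a few lines. Below is a minimal, illustrative implementation of the regression form of the combined loss, L = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ/2, where sᵢ = log σᵢ² is a learnable per-task log-variance (parametrizing by the log keeps the weight positive and training numerically stable; the classification term in the paper differs slightly). The function name and its standalone form are illustrative, not the authors' code:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses using homoscedastic task uncertainty.

    task_losses: list of scalar losses, one per task.
    log_vars:    list of learnable log-variances s_i = log(sigma_i^2),
                 one per task (trained jointly with the model weights).

    Each task contributes exp(-s_i) * L_i + s_i / 2: a large learned
    variance down-weights a noisy task's loss, while the s_i / 2 term
    penalizes the model for ignoring a task by inflating its variance.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + 0.5 * s
    return total
```

In a real model the `log_vars` would be trainable parameters (e.g. initialized to zero, i.e. σ² = 1) so that gradient descent balances the tasks automatically instead of requiring hand-tuned loss weights.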