17 Apr 2019 | Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, Patrick Pérez
The paper "ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation" addresses unsupervised domain adaptation (UDA) for semantic segmentation, particularly in "synthetic-2-real" setups where the training data is synthetic and the test data is real. The authors propose two entropy-based methods: (i) direct entropy minimization via an entropy loss on target predictions and (ii) indirect entropy minimization via an adversarial loss that aligns the entropy maps of source and target predictions. Both methods aim to bridge the domain gap by enforcing high prediction certainty on the unlabeled target domain. Experiments show state-of-the-art performance on two challenging benchmarks, GTA5→Cityscapes and SYNTHIA→Cityscapes, and demonstrate that the approach also transfers to object detection. The paper further discusses the advantages of each method, the impact of training only on specific entropy ranges, and the use of class-ratio priors to improve performance in certain settings.
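To make the direct entropy-minimization idea concrete, here is a minimal numpy sketch (not the authors' implementation) of a per-pixel entropy loss over softmax predictions, normalized by log(C) so each pixel's entropy lies in [0, 1]; the function name and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(logits, axis=0):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_loss(logits):
    """Mean normalized Shannon entropy of softmax predictions.

    logits: array of shape (C, H, W) -- per-pixel class scores.
    Each pixel's entropy is divided by log(C) so it lies in [0, 1];
    minimizing this loss pushes target predictions toward confident,
    low-entropy outputs.
    """
    num_classes = logits.shape[0]
    p = softmax(logits, axis=0)
    # Small epsilon guards against log(0) on near-one-hot predictions.
    ent = -(p * np.log(p + 1e-30)).sum(axis=0) / np.log(num_classes)
    return ent.mean()
```

Uniform logits give the maximum loss of 1.0, while near-one-hot logits drive the loss toward 0, which is exactly the certainty the paper's direct method enforces on target images. The adversarial variant instead trains a discriminator to distinguish source from target entropy maps, avoiding the need for a hand-tuned entropy weight.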