AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation

14 Mar 2024 | Haonan Wang, Qixiang Zhang, Yi Li, Xiaomeng Li
The paper "AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation" addresses the problem of labeled-data dominance in semi-supervised semantic segmentation (SSSS), where training is driven almost entirely by the small labeled set and yields sub-optimal results. The authors propose AllSpark, which uses channel-wise cross-attention to "reborn" labeled features from unlabeled ones: queries come from the labeled data flow while keys and values come from unlabeled data, so labeled features are reconstructed from unlabeled features. This injects diversity into the labeled data flow and creates a more challenging learning task. To ensure that the unlabeled features can adequately represent the labeled ones, the paper further introduces a Semantic Memory (S-Mem) together with a Channel Semantic Grouping strategy. AllSpark is designed to plug into general transformer-based segmentation models, avoiding the need for complex training-pipeline designs.

Extensive experiments on the Pascal, Cityscapes, and COCO benchmarks demonstrate that AllSpark outperforms existing methods, providing solid performance gains across all evaluation protocols. The code and model weights are available at <https://github.com/xmed-lab/AllSpark>.
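The core mechanism can be illustrated with a minimal sketch of channel-wise cross-attention. This is not the authors' implementation; the function name, shapes, and single-head, projection-free formulation are assumptions made for clarity. The key difference from standard token attention is that the affinity matrix is C×C (over channels) rather than N×N (over spatial tokens), with queries taken from labeled features and keys/values from unlabeled features:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_wise_cross_attention(f_labeled, f_unlabeled):
    """Hypothetical sketch of AllSpark-style channel-wise cross-attention.

    f_labeled, f_unlabeled: (N, C) arrays of N spatial tokens with C channels.
    Queries come from labeled features; keys and values from unlabeled ones,
    so the labeled features are "reborn" from the unlabeled feature channels.
    """
    q = f_labeled.T                            # (C, N)
    k = f_unlabeled.T                          # (C, N)
    v = f_unlabeled.T                          # (C, N)
    scale = q.shape[1] ** -0.5                 # scale by sqrt of token count
    attn = softmax(q @ k.T * scale, axis=-1)   # (C, C) channel affinity
    reborn = attn @ v                          # (C, N) reconstructed channels
    return reborn.T                            # back to (N, C)
```

In a real model the inputs would be flattened transformer feature maps with learned query/key/value projections, and the keys/values could instead be drawn from a category-wise Semantic Memory of accumulated unlabeled features.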