What’s the Point: Semantic Segmentation with Point Supervision

23 Jul 2016 | Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei
The paper "Semantic Segmentation with Point Supervision" by Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei addresses the trade-off between training-time annotation cost and test-time accuracy in semantic image segmentation. The authors propose a novel supervision method that leverages point-level supervision, where annotators point to objects, in addition to image-level labels. This approach is more efficient than full supervision, which requires detailed per-pixel annotations, and results in improved model accuracy. The paper introduces an objectness prior to help the model infer the extent of objects, enhancing the effectiveness of point-level supervision. Experimental results on the PASCAL VOC 2012 dataset show that the combined use of point-level supervision and objectness prior improves mean intersection over union (mIOU) by 12.9% compared to image-level supervision alone. The authors also demonstrate that models trained with point-level supervision outperform those trained with image-level, squiggle-level, or full supervision, given a fixed annotation budget. The paper includes a detailed analysis of the annotation process, error rates, and the effectiveness of different levels of supervision.The paper "Semantic Segmentation with Point Supervision" by Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei addresses the trade-off between training-time annotation cost and test-time accuracy in semantic image segmentation. The authors propose a novel supervision method that leverages point-level supervision, where annotators point to objects, in addition to image-level labels. This approach is more efficient than full supervision, which requires detailed per-pixel annotations, and results in improved model accuracy. The paper introduces an objectness prior to help the model infer the extent of objects, enhancing the effectiveness of point-level supervision. Experimental results on the PASCAL VOC 2012 dataset show that the combined use of point-level supervision and objectness prior improves mean intersection over union (mIOU) by 12.9% compared to image-level supervision alone. The authors also demonstrate that models trained with point-level supervision outperform those trained with image-level, squiggle-level, or full supervision, given a fixed annotation budget. The paper includes a detailed analysis of the annotation process, error rates, and the effectiveness of different levels of supervision.