ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation

18 Apr 2016 | Di Lin1*, Jifeng Dai2, Jiaya Jia1, Kaiming He2, Jian Sun2
This paper proposes ScribbleSup, a method for training semantic segmentation networks from scribble annotations. Scribbles are a form of weak supervision that sits between image-level labels and box-level annotations: they are cheap to draw, yet mark specific pixels. They are also well suited to annotating "stuff" categories (e.g., water, sky, grass) that have no well-defined shape. The scribble annotations on PASCAL VOC are available at http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup.

The algorithm alternates between two coupled steps. A graphical model propagates label information from the scribbles to unmarked pixels, based on spatial constraints, appearance, and semantic content. The model is built on the superpixels of a training image: each vertex in the graph represents a superpixel, and each edge encodes the similarity between two superpixels. A fully convolutional network (FCN) is then trained under the supervision of the propagated labels, and in turn provides semantic predictions that feed back into the graphical model.

On the PASCAL VOC dataset, the method achieves higher accuracy than other weakly-supervised methods and competitive results overall. On the PASCAL-CONTEXT dataset, it outperforms previous methods that cannot exploit scribbles, thanks to the extra, inexpensive scribble annotations.
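The alternating scheme can be sketched in toy form: fix an appearance model, relabel the unmarked superpixels by minimizing a unary-plus-pairwise cost over the graph, then refit the model from the current labels. In the sketch below, a per-class mean feature stands in for the FCN, and iterated conditional modes (ICM) stands in for the paper's graph solver; the function names, energy terms, and data layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def propagate_labels(features, edges, scribble_labels, n_labels, n_iters=10):
    """Toy alternating optimization over a superpixel graph.

    features        : (n, d) array, one appearance feature per superpixel
    edges           : dict mapping node i -> list of (neighbor j, weight w)
    scribble_labels : per-node class index, or -1 for unmarked superpixels
    """
    n = len(features)
    labels = np.array(scribble_labels)
    fixed = labels >= 0                       # scribbled nodes stay fixed
    rng = np.random.default_rng(0)
    # initialize unmarked nodes randomly (assumption, for the sketch only)
    labels[~fixed] = rng.integers(0, n_labels, size=(~fixed).sum())

    for _ in range(n_iters):
        # "network" step: per-class mean feature (stand-in for FCN training)
        means = np.stack([
            features[labels == c].mean(axis=0)
            if (labels == c).any() else np.zeros(features.shape[1])
            for c in range(n_labels)
        ])
        # "propagation" step: ICM sweep minimizing unary + pairwise cost
        for i in range(n):
            if fixed[i]:
                continue
            # unary: squared distance to each class appearance model
            cost = ((means - features[i]) ** 2).sum(axis=1)
            # pairwise: penalize disagreeing with similar neighbors
            for j, w in edges.get(i, []):
                cost += w * (np.arange(n_labels) != labels[j])
            labels[i] = int(cost.argmin())
    return labels

# Usage: a 4-node chain with scribbles at both ends; the two middle,
# unmarked superpixels inherit the label of the end they resemble.
feats = np.array([[0.0], [0.1], [0.9], [1.0]])
edges = {1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)]}
print(propagate_labels(feats, edges, [0, -1, -1, 1], n_labels=2))  # → [0 0 1 1]
```

In the paper, the propagation step additionally enforces that a superpixel touched by a scribble takes that scribble's label, which the `fixed` mask mimics here.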