DensePose: Dense Human Pose Estimation In The Wild

1 Feb 2018 | Riza Alp Güler*, Natalia Neverova, Iasonas Kokkinos
This paper introduces DensePose-COCO, a large-scale dataset for dense human pose estimation, and presents DensePose-RCNN, a deep learning model that densely regresses part-specific UV coordinates within human regions at multiple frames per second. The authors first gather dense image-to-surface correspondences for 50K persons in the COCO dataset through an efficient annotation pipeline. They then use this dataset to train CNN-based systems that deliver dense correspondence in real-world scenarios with background clutter, occlusions, and scale variations. To improve the effectiveness of the training set, they train an 'inpainting' network that fills in missing ground-truth values. Comparing fully-convolutional networks with region-based models, they find the latter superior, and they further improve accuracy through cascading, achieving highly accurate results in real time. Supplementary materials and videos are available on the project page <http://densepose.org>.
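The dense regression described above can be thought of as two per-pixel outputs: a part classification (which of the 24 surface parts, or background, a pixel belongs to) and, for the winning part, regressed U and V surface coordinates. The following is a minimal sketch of that decoding step using random arrays as stand-ins for the network's outputs; the shapes, variable names, and the 24-part layout are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical setup: 24 body-surface parts plus background, on a small H x W crop.
NUM_PARTS, H, W = 24, 8, 8
rng = np.random.default_rng(0)

# Stand-ins for what the model's heads would predict for one human region:
part_logits = rng.standard_normal((NUM_PARTS + 1, H, W))  # channel 0 = background
u_maps = rng.random((NUM_PARTS, H, W))  # per-part U regression maps, in [0, 1)
v_maps = rng.random((NUM_PARTS, H, W))  # per-part V regression maps, in [0, 1)

# Per-pixel part assignment: argmax over the classification channels.
part_idx = part_logits.argmax(axis=0)   # (H, W), values in [0, NUM_PARTS]
fg = part_idx > 0                       # foreground (non-background) mask

# For each foreground pixel, read the UV value from the winning part's map.
u = np.zeros((H, W))
v = np.zeros((H, W))
ys, xs = np.nonzero(fg)
u[ys, xs] = u_maps[part_idx[ys, xs] - 1, ys, xs]
v[ys, xs] = v_maps[part_idx[ys, xs] - 1, ys, xs]
```

The resulting `(part_idx, u, v)` triple per pixel is the dense correspondence: it names a point on the body surface for every foreground pixel, which is what the region-based model produces within each detected person box.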