Point-SAM: Promptable 3D Segmentation Model for Point Clouds

25 Jun 2024 | Yuchen Zhou 1*, Jiayuan Gu 1*, Tung Yen Chiang 1, Fanbo Xiang 1, Hao Su 1,2
The paper introduces Point-SAM, a 3D promptable segmentation model for point clouds, which extends the successful 2D foundation model, SAM, to the 3D domain. Point-SAM addresses the challenges of non-unified data formats, lightweight models, and limited labeled data in 3D by leveraging part-level and object-level annotations and a data engine to generate pseudo labels from SAM. The model is trained on a mixture of heterogeneous datasets, including PartNet and ScanNet, and demonstrates superior performance on various indoor and outdoor benchmarks. Key contributions include the development of Point-SAM, a data engine for generating diverse pseudo labels, and the model's strong zero-shot transferability to unseen point-cloud distributions and new tasks. The paper also discusses related work, training details, and experimental results, highlighting the model's effectiveness in zero-shot point-prompted segmentation, zero-shot object proposals, and few-shot part segmentation.
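To make the promptable-segmentation interface concrete: the model takes a point cloud plus a user prompt (e.g. a clicked point) and returns a binary mask over the cloud. The toy sketch below illustrates only that input/output contract with simple radius-based region growing; it is a hypothetical illustration and not Point-SAM's actual transformer-based architecture, whose details are in the paper.

```python
import math

def point_prompt_segment(points, prompt_idx, radius=0.5):
    """Toy point-prompted segmentation via region growing.

    Conceptual stand-in for the promptable interface (click a point,
    get a mask over the cloud); NOT the paper's actual model.
    """
    mask = [False] * len(points)
    frontier = [prompt_idx]
    mask[prompt_idx] = True
    while frontier:
        i = frontier.pop()
        for j, q in enumerate(points):
            # Grow the mask to any unvisited point within `radius`
            # of an already-masked point.
            if not mask[j] and math.dist(points[i], q) <= radius:
                mask[j] = True
                frontier.append(j)
    return mask

# Two well-separated clusters; a prompt on the first cluster
# should segment only that cluster.
cloud = [(0, 0, 0), (0.3, 0, 0), (0.6, 0, 0), (5, 5, 5), (5.3, 5, 5)]
mask = point_prompt_segment(cloud, prompt_idx=0, radius=0.5)
print(mask)  # first three points in, last two out
```

The real model replaces the hand-set radius with learned geometric context, which is what enables the zero-shot transfer across point-cloud distributions described above.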