Part123: Part-aware 3D Reconstruction from a Single-view Image

27 May 2024 | Anran Liu, Cheng Lin, Yuan Liu, Xiaoxiao Long, Zhiyang Dou, Hao-Xiang Guo, Ping Luo, Wenping Wang
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image. Given an input image, it generates multiview-consistent images with a diffusion model and applies the Segment Anything Model (SAM) to obtain 2D segmentation masks for each view. These masks are then incorporated into a neural rendering framework, where contrastive learning is used to learn a part-aware feature space that is robust to inconsistent 2D segmentations across views. A clustering-based algorithm then automatically derives 3D part segmentation from the learned features, determining the number of parts without manual tuning.

By integrating multiview generation with a 2D segmentation model, Part123 enables seamless part-aware 3D reconstruction from a single image and produces high-quality 3D models with structurally meaningful part segmentation. This benefits shape-processing tasks such as feature-preserving reconstruction, primitive-based reconstruction, and shape editing. Experiments on the Google Scanned Objects dataset and on real-world images show that Part123 outperforms existing methods in part-aware reconstruction, generates high-quality part segments on diverse objects, and generalizes well to real-world scenarios, offering a practical solution for part-aware 3D modeling.
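To make the two learning-related steps concrete, below is a minimal sketch of (a) a contrastive loss that pulls together per-pixel features sharing a 2D mask label and pushes apart the rest, and (b) a greedy clustering pass over surface features in which the number of parts falls out of a similarity threshold. This is an illustrative stand-in, not the paper's exact formulation: the function names, the greedy centroid scheme, and the `merge_threshold` parameter are assumptions for this sketch.

```python
import numpy as np

def part_contrastive_loss(features, part_ids, temperature=0.1):
    """Illustrative contrastive loss (not the paper's exact loss):
    for each pixel, treat pixels with the same 2D mask id as positives
    and all other pixels as negatives (InfoNCE-style)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(part_ids)
    eye = np.eye(n, dtype=bool)
    pos = (part_ids[:, None] == part_ids[None, :]) & ~eye
    # denominator sums over all other pixels (exclude self via -inf)
    sim_masked = np.where(eye, -np.inf, sim)
    log_z = np.log(np.exp(sim_masked).sum(axis=1))
    losses = [float(np.mean(log_z[i] - sim[i, pos[i]]))
              for i in range(n) if pos[i].any()]
    return float(np.mean(losses))

def cluster_parts(surface_features, merge_threshold=0.9):
    """Hypothetical stand-in for the clustering step: greedily assign
    each surface point to the most similar existing cluster (cosine
    similarity on normalized features), creating a new cluster when no
    similarity exceeds `merge_threshold`. The final number of clusters
    (parts) is thus determined automatically."""
    f = surface_features / np.linalg.norm(
        surface_features, axis=1, keepdims=True)
    labels = -np.ones(len(f), dtype=int)
    centroids = []  # each centroid is the first feature seen for a part
    for i, fi in enumerate(f):
        if centroids:
            sims = np.array([c @ fi for c in centroids])
            j = int(np.argmax(sims))
            if sims[j] > merge_threshold:
                labels[i] = j
                continue
        centroids.append(fi)
        labels[i] = len(centroids) - 1
    return labels
```

With well-separated features, points sharing a part get the same label, and the contrastive loss is lower when `part_ids` match the true grouping than when they are scrambled, which is what drives the feature space toward part awareness during training.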