AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

20 Jul 2018 | Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
AtlasNet is a novel approach for learning to generate 3D surfaces from 2D images or 3D point clouds. The method represents a 3D shape as a collection of parametric surface elements, allowing it to jointly produce a surface mesh and an atlas parameterization of that mesh. Unlike methods that generate voxel grids or point clouds, AtlasNet directly infers a surface representation of the shape, which brings improved precision, better generalization, and the ability to generate arbitrarily high-resolution surfaces without the memory cost of volumetric grids.

At its core, AtlasNet trains a neural network to map points sampled from the unit 2D square, conditioned on a learned shape feature, onto the surface of a 3D shape, so that each surface element is a continuous image of a planar patch. Because every generated 3D point carries the UV coordinates it was mapped from, a tessellation or texture map defined on the square transfers directly to the generated surface.

The method is evaluated on the ShapeNet benchmark for two applications: autoencoding 3D shapes and single-view reconstruction from images, where it outperforms existing baselines such as PointSetGen, 3D-R2N2, and HSP in reconstruction quality and detail preservation. It also generates high-resolution meshes suitable for 3D printing, and shows promise for further applications including shape interpolation, parameterization, super-resolution, shape matching, and co-segmentation.
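To make the core idea concrete, below is a minimal sketch of an AtlasNet-style surface-element decoder, assuming PyTorch. The names and hyperparameters (`PatchDecoder`, `latent_dim=1024`, `hidden=512`) are illustrative rather than the authors' actual identifiers; the full method learns several such decoders, one per surface patch, and trains them with a Chamfer distance between generated and ground-truth point sets.

```python
# Sketch only: an MLP that maps (2D point in the unit square, shape code)
# to a 3D surface point, plus a brute-force Chamfer loss for training.
import torch
import torch.nn as nn

class PatchDecoder(nn.Module):
    """Maps UV samples from the unit square, concatenated with a latent
    shape code, to 3D points on one parametric surface element."""
    def __init__(self, latent_dim=1024, hidden=512):  # illustrative sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),  # bounded 3D coordinates
        )

    def forward(self, uv, code):
        # uv:   (B, N, 2) points sampled in [0, 1]^2
        # code: (B, latent_dim) shape feature from the encoder
        code = code.unsqueeze(1).expand(-1, uv.shape[1], -1)
        return self.net(torch.cat([uv, code], dim=-1))  # (B, N, 3)

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a: (B, N, 3) and
    b: (B, M, 3); an O(N*M) version written for clarity, not speed."""
    d = torch.cdist(a, b)  # (B, N, M) pairwise Euclidean distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```

Because the decoder is a continuous map, sampling more UV points at test time yields a denser surface at no extra training cost, which is where the method's resolution independence comes from.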
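The tessellation-transfer property can be sketched the same way: sample a regular grid on the unit square, push it through the trained decoder, and reuse the grid's own triangulation as the face list of the output mesh. This assumes the hypothetical `PatchDecoder` above; `resolution` is an arbitrary choice.

```python
# Sketch only: recover a triangle mesh by mapping a gridded tessellation
# of the unit square through the learned parameterization.
import torch

def grid_to_mesh(decoder, code, resolution=32):
    # Regular (resolution x resolution) grid of UV samples in [0, 1]^2.
    lin = torch.linspace(0.0, 1.0, resolution)
    u, v = torch.meshgrid(lin, lin, indexing="ij")
    uv = torch.stack([u, v], dim=-1).reshape(1, -1, 2)  # (1, R*R, 2)

    # Vertices are the decoder's image of the grid; code: (latent_dim,).
    verts = decoder(uv, code.unsqueeze(0)).squeeze(0)   # (R*R, 3)

    # Two triangles per grid cell: the square's tessellation transfers
    # directly to the generated surface.
    faces = []
    for i in range(resolution - 1):
        for j in range(resolution - 1):
            a = i * resolution + j
            b, c, d = a + 1, a + resolution, a + resolution + 1
            faces.append([a, b, c])
            faces.append([b, d, c])
    return verts, torch.tensor(faces)
```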