AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation


20 Jul 2018 | Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
**Abstract:** This paper introduces AtlasNet, a method for generating 3D surfaces from 2D images or 3D point clouds. Unlike existing methods that generate voxel grids or point clouds, AtlasNet represents a 3D shape as a collection of parametric surface elements, naturally producing a surface representation. Its key strength is jointly learning a parameterization and an embedding of a shape, which improves precision and generalization and allows meshes of arbitrary resolution to be generated without memory issues. The method is evaluated on the ShapeNet benchmark for two applications: auto-encoding shapes and single-view reconstruction from still images. Results demonstrate the advantages of AtlasNet over strong baselines and show its potential for other applications such as shape interpolation, parameterization, super-resolution, matching, and co-segmentation.

**Introduction:** The paper addresses the challenge of learning representations for generating high-resolution 3D shapes. It compares volumetric and point-cloud representations, highlighting their limitations in smoothness, connectivity, and memory efficiency. AtlasNet instead learns a surface representation directly, inspired by the topological definition of a surface: multiple learnable parameterizations cover the surface, much like placing strips of paper on a shape (hence "papier-mâché"), each yielding a continuous, smooth 2-manifold patch. The learned transformations map 2D squares onto the 3D surface, enabling texture mapping and meshing.

**Related Work:** The paper reviews existing methods for 2-manifold representation and 3D shape generation, including polygon meshes, geometry images, and volumetric representations, as well as deep learning approaches to 3D shape generation such as PointNet and octree-based methods.

**AtlasNet:** AtlasNet decodes a 3D surface from a learned shape feature: a multi-layer perceptron (MLP) with ReLU nonlinearities maps points sampled in the unit square, concatenated with the shape feature, to points on the 3D surface. The model is trained to minimize the Chamfer distance between the generated points and the target surface.

**Results:** The paper evaluates AtlasNet on the ShapeNet dataset, comparing it to baselines in terms of Chamfer distance and Metro distance. AtlasNet outperforms the baselines on both metrics, demonstrating its effectiveness for auto-encoding and single-view reconstruction. Additional applications, such as shape interpolation, parameterization, and mesh generation, are also explored.

**Conclusion:** AtlasNet provides a novel approach to 3D surface generation, offering improved precision, generalization, and the ability to generate high-resolution meshes. The method has potential for various applications in 3D shape analysis and synthesis.
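The decoder idea above can be sketched in a few lines of NumPy: an MLP takes a 2D point from the unit square concatenated with a shape feature and outputs a 3D point, and sampling a regular grid on the square yields a surface at any resolution. This is only an illustrative sketch, not the paper's implementation: the layer sizes are assumptions, and the weights here are random rather than trained, so the output is an arbitrary smooth patch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's exact architecture):
# a 1024-d shape feature and a small ReLU MLP with a tanh output.
LATENT, HIDDEN = 1024, 512

def init_mlp():
    """Random (untrained) weights for a [2+LATENT] -> HIDDEN -> HIDDEN -> 3 MLP."""
    sizes = [2 + LATENT, HIDDEN, HIDDEN, 3]
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def decode(params, uv, feat):
    """Map 2D points uv (K, 2), concatenated with shape feature feat
    (LATENT,), to 3D surface points (K, 3)."""
    x = np.concatenate([uv, np.broadcast_to(feat, (uv.shape[0], LATENT))], axis=1)
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        # ReLU on hidden layers, tanh on the final layer.
        x = np.tanh(x) if i == len(params) - 1 else np.maximum(x, 0.0)
    return x

# Meshing at arbitrary resolution: push a regular grid on the unit
# square through the decoder; grid connectivity gives the mesh faces.
params = init_mlp()
feat = rng.standard_normal(LATENT)
n = 16
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
uv = np.stack([u.ravel(), v.ravel()], axis=1)   # (n*n, 2)
surface = decode(params, uv, feat)              # (n*n, 3)
print(surface.shape)  # (256, 3)
```

Because the mapping is continuous, increasing `n` refines the same surface patch rather than producing new independent points, which is what allows arbitrary-resolution meshing at constant memory per forward pass.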
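The Chamfer distance used as the training loss above compares two point sets without requiring correspondences: each point is matched to its nearest neighbour in the other set, in both directions. A minimal NumPy sketch (a brute-force O(NM) version; practical implementations use spatial data structures or GPU kernels):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    the mean squared nearest-neighbour distance from a to b, plus the same
    from b to a."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical clouds have zero Chamfer distance.
pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

The loss is differentiable almost everywhere with respect to the generated points, which is what makes it usable for training a point-generating decoder; note that it measures point-set proximity only and says nothing about surface connectivity, which is why the paper additionally reports the Metro distance on meshes.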