AToM: Amortized Text-to-Mesh using 2D Diffusion

1 Feb 2024 | Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, Igor Gilitschenski, Jian Ren, Bernard Ghanem, Kfir Aberman, Sergey Tulyakov
The paper introduces Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh framework that generates textured meshes from text prompts in under one second. AToM is optimized across many text prompts simultaneously, cutting training cost by roughly 10x compared with per-prompt optimization. Its key innovation is a triplane-based text-to-mesh architecture trained with a two-stage amortized optimization strategy, which stabilizes training and enables scalability. Extensive experiments on multiple benchmarks show that AToM outperforms state-of-the-art amortized approaches, achieving over 4x higher accuracy on the DF415 dataset and producing more distinguishable, higher-quality 3D outputs. AToM also generalizes well, generating fine-grained 3D assets for unseen prompts without any further optimization at inference time.
The paper analyzes the training instability of naive amortized text-to-mesh optimization and addresses it with a text-to-triplane network and a two-stage optimization process.
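To make the amortization idea concrete, the following is a minimal toy sketch (not AToM's actual triplane architecture or losses, which the summary does not detail): a single shared mapping is optimized across all prompt embeddings at once, so that inference for any prompt, seen or unseen, is one feed-forward pass with no per-prompt optimization. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a shared linear "text-to-shape" map W is trained
# across many prompt embeddings simultaneously (amortized optimization),
# instead of optimizing a separate representation per prompt.
rng = np.random.default_rng(0)

num_prompts, embed_dim, shape_dim = 8, 4, 3
prompts = rng.normal(size=(num_prompts, embed_dim))   # stand-in text embeddings
targets = rng.normal(size=(num_prompts, shape_dim))   # stand-in supervision signal

W = np.zeros((embed_dim, shape_dim))                  # shared weights, amortized over prompts
lr = 0.1
for step in range(500):
    preds = prompts @ W                               # one forward pass for ALL prompts
    grad = prompts.T @ (preds - targets) / num_prompts  # gradient of mean squared error
    W -= lr * grad

# At inference, a new (unseen) prompt embedding maps to an output in a single
# feed-forward pass -- no further optimization needed.
new_prompt = rng.normal(size=embed_dim)
shape = new_prompt @ W
print(shape.shape)
```

The design point this illustrates is the cost structure: per-prompt optimization pays the full training cost again for every prompt, while the amortized model pays it once and then answers each prompt in milliseconds.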