V3D: Video Diffusion Models are Effective 3D Generators

11 Mar 2024 | Zilong Chen¹,², Yikai Wang¹†, Feng Wang¹, Zhengyi Wang¹,², and Huaping Liu¹†
V3D is a novel approach to 3D generation built on video diffusion models. The method leverages a pre-trained video diffusion model to generate consistent multi-view images and reconstruct 3D assets from them. By fine-tuning the video diffusion model on 3D datasets, V3D can generate a high-quality 3D object within 3 minutes. The approach introduces a geometric consistency prior that enhances the diffusion model's ability to produce multi-view-consistent 3D assets. V3D also extends to scene-level novel view synthesis, achieving precise control over camera paths from sparse input views. A tailored reconstruction pipeline converts the generated views into high-quality 3D Gaussians or textured meshes. Extensive experiments validate the approach, demonstrating superior generation quality and multi-view consistency. V3D is effective for both object-centric and scene-level 3D generation, and the code is available at https://github.com/heheyas/V3D.
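As a rough illustration of the object-centric setting, the sketch below builds evenly spaced orbiting camera poses of the kind that could condition the frames of the generated multi-view "video". It is a minimal NumPy sketch under assumed conventions (18 views, zero elevation, fixed radius, z-up world), not V3D's actual interface.

```python
import numpy as np

def orbit_camera_poses(num_views: int = 18, elevation_deg: float = 0.0, radius: float = 2.0):
    """Camera-to-world matrices for evenly spaced views on a circular orbit,
    all looking at the origin (illustrative of the object-centric setting)."""
    poses = []
    elev = np.deg2rad(elevation_deg)
    for i in range(num_views):
        azim = 2.0 * np.pi * i / num_views
        # Camera position on the orbit around the object.
        eye = radius * np.array([np.cos(elev) * np.cos(azim),
                                 np.cos(elev) * np.sin(azim),
                                 np.sin(elev)])
        # Look-at frame: forward points from the camera toward the origin.
        forward = -eye / np.linalg.norm(eye)
        right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
        right /= np.linalg.norm(right)
        up = np.cross(right, forward)
        pose = np.eye(4)
        pose[:3, :3] = np.stack([right, up, -forward], axis=1)  # columns: camera x, y, z axes
        pose[:3, 3] = eye
        poses.append(pose)
    return poses

# Each pose would condition one frame of the multi-view "video"; the resulting
# frames would then be lifted to 3D Gaussians or a textured mesh by the
# reconstruction stage described above (not shown here).
cameras = orbit_camera_poses(num_views=18)
print(len(cameras), cameras[0].shape)  # 18 (4, 4)
```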