24 May 2024 | ZHENNAN WU, The University of Tokyo, Japan; YANG LI*, Tencent XR Vision Labs, China; HAN YAN†, Shanghai Jiao Tong University, China; TAIZHANG SHANG, Tencent XR Vision Labs, China; WEIXUAN SUN, Tencent XR Vision Labs, China; SENBO WANG, Tencent XR Vision Labs, China; RUIKAI CUI, ANU, Australia; WEIZHE LIU, Tencent XR Vision Labs, China; HIROYUKI SATO, The University of Tokyo, Japan; HONGDONG LI, ANU, Australia; PAN JI, Tencent XR Vision Labs, China
BlockFusion is a diffusion-based model designed to generate 3D scenes as unit blocks and seamlessly extend these scenes by incorporating new blocks. The model is trained using datasets of 3D blocks randomly cropped from complete 3D scene meshes. Through per-block fitting, all training blocks are converted into hybrid neural fields, which consist of a tri-plane containing geometry features and a Multi-layer Perceptron (MLP) for decoding signed distance values. A variational auto-encoder compresses the tri-planes into a latent tri-plane space, where the denoising diffusion process is performed. This allows for high-quality and diverse 3D scene generation.
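To make the hybrid neural field concrete, the following minimal sketch (in PyTorch) shows a tri-plane of learnable features queried by projecting 3D points onto three axis-aligned planes and decoded into signed distance values by a small MLP. The class name, feature dimension, plane resolution, and MLP width are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a tri-plane + MLP signed-distance field.
# Dimensions and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneSDF(nn.Module):
    def __init__(self, res=128, feat_dim=32, hidden=128):
        super().__init__()
        # Three axis-aligned feature planes: XY, XZ, YZ.
        self.planes = nn.Parameter(torch.randn(3, feat_dim, res, res) * 0.01)
        # Small MLP that decodes concatenated plane features into an SDF value.
        self.decoder = nn.Sequential(
            nn.Linear(3 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points in [-1, 1]^3.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        feats = []
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)                       # (1, N, 1, 2) sampling grid
            f = F.grid_sample(plane[None], grid, align_corners=True)
            feats.append(f.view(plane.shape[0], -1).t())      # (N, feat_dim)
        return self.decoder(torch.cat(feats, dim=-1))         # (N, 1) signed distance
```

In per-block fitting, such a field would be optimized so that its predicted signed distances match values sampled from a cropped training block, after which the fitted tri-plane is compressed by the variational auto-encoder into the latent space used for diffusion.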
To expand a scene during generation, empty blocks are appended to overlap with the current scene, and existing latent tri-planes are extrapolated to populate the new blocks. The extrapolation is done by conditioning the generation process with feature samples from overlapping tri-planes during denoising iterations. This produces semantically and geometrically meaningful transitions that blend harmoniously with the existing scene. A 2D layout conditioning mechanism is introduced to control the placement and arrangement of scene elements, providing users with more control over the generation process.
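The sketch below illustrates one way such conditioning could be realized during the reverse diffusion loop: the region of the new block's latent tri-plane that overlaps the existing scene is repeatedly constrained to agree with latents resampled from that scene (a RePaint-style re-noise-and-paste scheme). This is an illustrative stand-in for the paper's conditioning on overlapping tri-plane features; the `denoiser` signature, the `alphas_cumprod` schedule, and the mask layout are assumptions.

```python
# Hedged sketch of extrapolating a new block's latent tri-plane, conditioned
# on the latents of the overlapping region of the existing scene.
# The re-noise-and-paste scheme is illustrative, not the paper's exact mechanism.
import torch

@torch.no_grad()
def extrapolate_block(denoiser, known_latent, overlap_mask, alphas_cumprod, steps=1000):
    """known_latent: latents resampled from the existing scene's overlapping tri-planes.
    overlap_mask: 1 where the new block overlaps the existing scene, 0 elsewhere."""
    x = torch.randn_like(known_latent)                     # start from pure noise
    for t in reversed(range(steps)):
        a_t = alphas_cumprod[t]
        # Re-noise the known overlap region to the current noise level and paste
        # it in, so denoising stays consistent with the existing scene.
        noised_known = a_t.sqrt() * known_latent + (1 - a_t).sqrt() * torch.randn_like(known_latent)
        x = overlap_mask * noised_known + (1 - overlap_mask) * x
        x = denoiser(x, t)                                 # one reverse-diffusion step
    return x                                               # latent tri-plane for the new block
```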
Experimental results demonstrate that BlockFusion can generate diverse, geometrically consistent, and unbounded large 3D scenes with high-quality shapes in both indoor and outdoor scenarios. The method addresses the challenges of generating high-fidelity 3D shapes at the scene level and expanding scenes in a coherent manner.