24 May 2024 | ZHENNAN WU, YANG LI, HAN YAN, TAIZHANG SHANG, WEIXUAN SUN, SENBO WANG, RUIKAI CUI, WEIZHE LIU, HIROYUKI SATO, HONGDONG LI, PAN JI
BlockFusion is a diffusion-based model that generates 3D scenes as unit blocks and seamlessly incorporates new blocks to extend the scene. It is trained using datasets of 3D blocks randomly cropped from complete 3D scene meshes. The model converts all training blocks into hybrid neural fields, consisting of a tri-plane containing geometry features and an MLP for decoding signed distance values. A variational auto-encoder compresses the tri-planes into a latent tri-plane space, on which the denoising diffusion process is performed. This allows for high-quality and diverse 3D scene generation.
To expand a scene, empty blocks are appended to overlap with the current scene, and existing latent tri-planes are extrapolated to populate the new blocks. The extrapolation is done by conditioning the generation process with feature samples from overlapping tri-planes during denoising iterations. This produces semantically and geometrically meaningful transitions that harmoniously blend with the existing scene. A 2D layout conditioning mechanism is used to control the placement and arrangement of scene elements. Experimental results show that BlockFusion can generate diverse, geometrically consistent, and unbounded large 3D scenes with unprecedented high-quality shapes in both indoor and outdoor scenarios.
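The sketch below illustrates one way such overlap conditioning could look during sampling, in the spirit of inpainting-style guidance: at each denoising step, the region of the new block's latent tri-plane that overlaps the existing scene is overwritten with a re-noised copy of the known latents before the denoiser is applied. The `denoiser` interface, `alphas_cumprod` schedule, mask layout, and simplified update rule are assumptions for illustration, not BlockFusion's exact procedure.

```python
# Hedged sketch of overlap-conditioned latent tri-plane extrapolation during denoising.
# `denoiser`, `alphas_cumprod`, and the DDIM-style update are illustrative stand-ins.
import torch

@torch.no_grad()
def extrapolate_block(denoiser, known_latent, overlap_mask, alphas_cumprod, steps=1000):
    # known_latent: latent tri-plane values from the existing (overlapping) scene region.
    # overlap_mask: 1 where the new block overlaps the current scene, 0 elsewhere.
    x = torch.randn_like(known_latent)                  # start the new block from pure noise
    for t in reversed(range(steps)):
        a_t = alphas_cumprod[t]
        # Re-noise the known latents to the current noise level ...
        noised_known = a_t.sqrt() * known_latent + (1 - a_t).sqrt() * torch.randn_like(x)
        # ... and paste them into the overlap so generation is conditioned on them.
        x = overlap_mask * noised_known + (1 - overlap_mask) * x
        # Predict noise and take one simplified (DDIM-style) step on the whole latent.
        eps = denoiser(x, torch.tensor([t]))
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
    return x                                             # extrapolated latent tri-plane
```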
In summary, BlockFusion contributes a generalizable, high-quality 3D generation model based on latent tri-plane diffusion, a latent tri-plane extrapolation mechanism for harmonious scene expansion, and a 2D layout conditioning mechanism for precise control over scene generation.