Disentangled 3D Scene Generation with Layout Learning

26 Feb 2024 | Dave Epstein, Ben Poole, Ben Mildenhall, Alexei A. Efros, Aleksander Holynski
The paper introduces a method to generate 3D scenes that are disentangled into their component objects, using only the knowledge of a large pretrained text-to-image model. The key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene. The method jointly optimizes multiple NeRFs (Neural Radiance Fields) from scratch, each representing a different object, along with a set of layouts that composite these objects into scenes. The goal is to encourage these composited scenes to be in-distribution according to the image generator. The approach is shown to successfully generate 3D scenes decomposed into individual objects, enabling new capabilities in text-to-3D content creation. The paper includes an interactive demo and demonstrates the utility of layout learning on various tasks, such as building scenes around specific assets, sampling different arrangements for a set of assets, and parsing a provided NeRF into its constituent objects, all without additional supervision beyond text prompts. The method is evaluated through ablation studies and compared to baselines, showing competitive performance in object disentanglement and scene quality.
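
To make the mechanics concrete, below is a minimal sketch of the joint optimization described above: K per-object NeRFs and N learned layouts are trained together so that composited renderings are scored as plausible by a pretrained text-to-image model. This is an illustrative sketch, not the authors' implementation: PyTorch is assumed, and `ObjectNeRF`, `render_composite`, and `sds_loss` are hypothetical placeholders standing in for the paper's actual NeRF backbone, volumetric compositing, and score-distillation guidance.

```python
# Sketch of layout learning (assumptions: PyTorch; ObjectNeRF, render_composite,
# and sds_loss are hypothetical stand-ins, not the paper's actual code).
import torch

K, N = 3, 2  # K object NeRFs, N learned layouts (scene arrangements)

class ObjectNeRF(torch.nn.Module):
    """Toy stand-in for one per-object radiance field."""
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                                       torch.nn.Linear(64, 4))  # RGB + density
    def forward(self, xyz):
        return self.mlp(xyz)

objects = torch.nn.ModuleList(ObjectNeRF() for _ in range(K))
# Each layout holds one transform per object (translations only here, for brevity).
layouts = torch.nn.Parameter(torch.randn(N, K, 3) * 0.1)

def render_composite(objects, layout, camera):
    """Placeholder: offset each object's sample points by its layout and blend the
    outputs into one image (the real method composites densities volumetrically;
    the camera argument is ignored in this stub)."""
    xyz = torch.rand(4096, 3) - 0.5  # dummy sample points
    rgbs = [obj(xyz + layout[i]) for i, obj in enumerate(objects)]
    return torch.stack(rgbs).mean(0)[:, :3].reshape(1, 64, 64, 3).permute(0, 3, 1, 2)

def sds_loss(image, prompt):
    """Placeholder for score-distillation guidance from a pretrained text-to-image model."""
    return image.square().mean()

opt = torch.optim.Adam(list(objects.parameters()) + [layouts], lr=1e-3)
for step in range(100):
    n = torch.randint(N, (1,)).item()  # sample one layout per iteration
    image = render_composite(objects, layouts[n], camera=None)
    loss = sds_loss(image, "an example scene prompt")
    opt.zero_grad(); loss.backward(); opt.step()
```

In the actual method, each layout applies a rigid transform to every object before the per-object fields are composited and rendered; the sketch only conveys the overall structure, in which gradients from the image-generator guidance flow into both the object NeRFs and the layout parameters.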