11 Jun 2024 | Haian Jin, Yuan Li, Fujun Luan, Yuanbo Xiangli, Sai Bi, Kai Zhang, Zexiang Xu, Noah Snavely
Neural Gaffer is an end-to-end 2D relighting diffusion model that produces accurate, high-quality relit images of any object from a single photograph under any environmental lighting condition. The model is conditioned on a target HDR environment map or a text description of the desired lighting; it requires no explicit scene decomposition and instead leverages the understanding of lighting inherent in diffusion models, learning the complex interplay between geometry, materials, and illumination from a large synthetic relighting dataset. This data-driven approach addresses the longstanding challenge of single-image relighting: although trained only on synthetic objects, the model transfers to real-world imagery, making it a versatile relighting tool. Beyond 2D tasks such as text-based relighting and object insertion, Neural Gaffer also serves as a strong relighting prior for 3D tasks such as relighting a radiance field, enabling a two-stage pipeline in which the first stage produces coarse relighting predictions and the second stage refines appearance details. Evaluations on both synthetic and real-world data demonstrate superior generalization and lighting accuracy, and the model outperforms existing methods in realism while handling a wide range of lighting conditions and object categories.
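Conceptually, relighting with an image-conditioned diffusion model amounts to denoising a target-image latent while conditioning the denoiser on both the source photograph and an encoding of the target environment map. The sketch below illustrates that conditioning pattern in PyTorch. It is a minimal toy stand-in, not the authors' implementation: every name in it (ToyRelightUNet, encode_env_map), along with the channel counts and the tone-mapping choice, is an assumption made for illustration, whereas Neural Gaffer itself builds on a pretrained latent diffusion backbone with its own lighting encoding.

```python
# Minimal conceptual sketch (not the authors' code) of image- and lighting-
# conditioned denoising for single-image relighting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRelightUNet(nn.Module):
    """Stand-in denoiser: predicts noise from the channel-wise concatenation of
    the noisy target latent, the source-image latent, and the lighting map."""
    def __init__(self, latent_ch=4, light_ch=3):
        super().__init__()
        in_ch = 2 * latent_ch + light_ch   # noisy target + source image + lighting
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, image_latent, light_map):
        x = torch.cat([noisy_latent, image_latent, light_map], dim=1)
        return self.net(x)                 # predicted noise, same shape as noisy_latent

def encode_env_map(hdr_env, latent_hw=32):
    """Toy lighting encoding: tone-map and resize the HDR environment map to the
    latent resolution (a crude stand-in for a learned lighting encoding)."""
    ldr = hdr_env / (1.0 + hdr_env)        # simple Reinhard-style tone mapping
    return F.interpolate(ldr, size=(latent_hw, latent_hw), mode="bilinear")

# One denoising call, conditioned on the input photo and a target environment map.
model = ToyRelightUNet()
image_latent = torch.randn(1, 4, 32, 32)   # encoded input photograph
hdr_env = torch.rand(1, 3, 128, 256) * 10  # target HDR environment map (lat-long)
light_map = encode_env_map(hdr_env)
noisy = torch.randn(1, 4, 32, 32)          # current noisy target latent
pred_noise = model(noisy, image_latent, light_map)
print(pred_noise.shape)                    # torch.Size([1, 4, 32, 32])
```

In a full system, a call like this would sit inside an iterative diffusion sampler; for the 3D pipeline described above, the resulting 2D relighting predictions would then supervise refinement of a radiance field's appearance.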