IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination

22 Apr 2024 | Xi Chen, Sida Peng, Dongchen Yang, Yuan Liu, Bowen Pan, Chengfei Lv, and Xiaowei Zhou
This paper presents a method for recovering object materials from posed images captured under unknown static illumination. The approach learns material priors with diffusion models and uses them to regularize the inverse rendering process. The key idea is to split the rendering equation into diffuse and specular shading terms and to formulate the material prior as conditional diffusion models of albedo and specular shading. This formulation allows the priors to be trained on existing 3D object data and naturally resolves the ambiguities of material recovery. A coarse-to-fine training strategy further guides the diffusion models to satisfy multi-view consistency constraints, leading to more stable and accurate results. Extensive experiments on synthetic and real-world datasets show that the method achieves state-of-the-art performance in material recovery.

The method follows a physically based inverse rendering pipeline built on neural scene representations: neural networks represent the object's materials and geometry, and these are combined with learnable lighting to synthesize images that are compared against the captured images to optimize the model parameters. The inherent ambiguity of this inverse problem is addressed by the learned material and shading prior, which regularizes the otherwise ill-posed decomposition of images into BRDFs and illumination.
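As a concrete illustration of the diffuse/specular split described above (a sketch in standard rendering-equation notation; the symbols are generic and not necessarily the paper's), the outgoing radiance at a surface point with albedo $a$, a Lambertian diffuse lobe $a/\pi$, and a specular lobe $f_s$ decomposes as

$$
L_o(\omega_o) = \int_{\Omega} f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, (n \cdot \omega_i)\, d\omega_i
= a \underbrace{\frac{1}{\pi} \int_{\Omega} L_i(\omega_i)\, (n \cdot \omega_i)\, d\omega_i}_{\text{diffuse shading } s_d}
+ \underbrace{\int_{\Omega} f_s(\omega_i, \omega_o)\, L_i(\omega_i)\, (n \cdot \omega_i)\, d\omega_i}_{\text{specular shading } s_p},
$$

so each observed pixel factors into albedo times a diffuse shading term plus a specular shading term, and the two conditional diffusion models act as priors on the albedo and the specular shading, respectively.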
The material prior is realized as conditional diffusion models of albedo and specular shading, conditioned on the input image, which provides direct supervision on material estimation. A guided sampling strategy keeps the generated samples consistent across views. The priors are trained on a dataset of paired RGB, albedo, and specular images built from the Objaverse dataset; they achieve state-of-the-art results on several benchmarks and generalize to internet images.

Evaluations on synthetic and real-world data show significant improvements in material recovery over existing methods. The approach disentangles materials and lighting from images, and it handles high-resolution inputs by cropping the image into smaller patches and applying diffusion posterior sampling so the patches remain consistent. Compared with other data-driven priors, the learned diffusion prior decouples shading from materials more effectively; it also handles indirect illumination and metallic materials, and it recovers more accurate specular shading than baseline methods, yielding accurate material recovery under unknown static lighting conditions.
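The guided sampling idea can be sketched as follows. This is a minimal Python illustration, not the authors' implementation: it assumes a plain DDPM sampler in which, at every denoising step, the predicted clean sample is blended toward a reference estimate (e.g. the current multi-view-consistent albedo) before the posterior mean is computed; the denoiser interface, the guidance_weight blend, and the single reference image are all illustrative assumptions.

import torch

@torch.no_grad()
def guided_ddpm_sampling(denoiser, cond_image, reference, betas, guidance_weight=0.5):
    """Sketch of guided sampling (illustrative, not the paper's exact scheme).
    denoiser(x_t, t, cond) -> predicted noise; reference is a coarse
    multi-view-consistent estimate with the same shape as the sample."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x_t = torch.randn_like(reference)                       # start from pure noise
    for t in reversed(range(len(betas))):
        a_t, ab_t = alphas[t], alpha_bars[t]
        eps = denoiser(x_t, torch.tensor([t]), cond_image)   # predict noise at step t
        # Predicted clean sample x0 from the noise estimate
        x0_hat = (x_t - torch.sqrt(1 - ab_t) * eps) / torch.sqrt(ab_t)
        # Guidance: blend the prediction toward the multi-view-consistent reference
        x0_hat = torch.lerp(x0_hat, reference, guidance_weight)
        # Standard DDPM posterior mean and variance using the guided x0
        ab_prev = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
        mean = (torch.sqrt(ab_prev) * betas[t] / (1 - ab_t)) * x0_hat \
             + (torch.sqrt(a_t) * (1 - ab_prev) / (1 - ab_t)) * x_t
        sigma = torch.sqrt(betas[t] * (1 - ab_prev) / (1 - ab_t))
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + sigma * noise
    return x_t

In this sketch the guidance term is what ties per-view samples together: each view's albedo sample is pulled toward the shared estimate during denoising, rather than being generated independently.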