IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination

22 Apr 2024 | Xi Chen, Sida Peng, Dongchen Yang, Yuan Liu, Bowen Pan, Chengfei Lv, Xiaowei Zhou
This paper addresses the challenge of recovering object materials from posed images captured under unknown static lighting. Traditional methods often struggle because object geometry, materials, and environment lighting are coupled, leading to ambiguous results. To overcome this, the authors propose learning material priors with generative models, specifically diffusion models, to regularize the optimization process. They split the general rendering equation into diffuse and specular shading terms and formulate the material prior as diffusion models of albedo and specular shading. This formulation allows the models to be trained on abundant existing 3D object data and serves as a versatile tool for resolving ambiguities in material recovery. In addition, a coarse-to-fine training strategy uses the estimated materials to guide the diffusion models, enforcing multi-view consistency and improving the accuracy of the recovered materials and lighting. Extensive experiments on real-world and synthetic datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance in material recovery. The code is available at https://zju3dv.github.io/IntrinsicAnything/.
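For context, the diffuse/specular split mentioned above can be written in the standard form below; the notation (albedo a(x), specular lobe f_s) is ours for illustration and may differ from the paper's exact parameterization:

\[
L_o(\mathbf{x}, \omega_o) =
\underbrace{\frac{a(\mathbf{x})}{\pi} \int_{\Omega} L_i(\mathbf{x}, \omega_i)\,(\mathbf{n}\cdot\omega_i)\, \mathrm{d}\omega_i}_{\text{diffuse shading}}
\;+\;
\underbrace{\int_{\Omega} f_s(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,(\mathbf{n}\cdot\omega_i)\, \mathrm{d}\omega_i}_{\text{specular shading}}
\]

Here L_i is the unknown environment lighting and n is the surface normal. Because the diffuse term factors into the albedo times a lighting-only integral, a generative prior over albedo (and over the specular shading term) can help disambiguate material from illumination during optimization.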