Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

ACM Transactions on Graphics, Vol. 38, No. 4, Article 29 (July 2019) | Ben Mildenhall*, University of California, Berkeley; Pratul P. Srinivasan*, University of California, Berkeley; Rodrigo Ortiz-Cayon, Fyusion Inc.; Nima Khademi Kalantari, Texas A&M University; Ravi Ramamoorthi, University of California, San Diego; Ren Ng, University of California, Berkeley; Abhishek Kar, Fyusion Inc.
The paper presents a practical and robust method for view synthesis from a set of input images captured with a handheld camera in an irregular grid pattern. A deep learning pipeline promotes each input view to a layered scene representation, and novel views are rendered by blending renderings from adjacent local light fields. The key contributions are:

1. **Prescriptive Sampling Guidelines**: Theoretical and empirical analysis shows that the required number of input views can be reduced dramatically, achieving high-fidelity view synthesis with up to 4000× fewer views than Nyquist-rate sampling (see the back-of-envelope sketch after this list).

2. **Practical Implementation**: A deep learning pipeline promotes each input view to a multiplane image (MPI) scene representation, and novel views are rendered by blending renderings from adjacent MPIs.

3. **Performance and Ablation Studies**: The method outperforms state-of-the-art techniques in perceptual quality, especially for non-Lambertian effects, and ablation studies validate the contribution of each component of the approach.

The authors also provide a smartphone app that guides users through capturing the input images, along with mobile and desktop viewer apps for real-time virtual exploration of the synthesized scenes. The method is demonstrated on a diverse set of complex real-world scenes.
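To give intuition for the sampling guideline, here is a rough back-of-envelope sketch, not the paper's exact derivation: under a simple pinhole model, a point at depth z seen from two cameras a baseline b apart has disparity f·b/z pixels. Plain light-field (Nyquist) sampling budgets roughly 1 pixel of disparity between adjacent views, while an MPI with D depth planes budgets roughly D pixels, so the allowable baseline grows by about a factor of D per axis. The function name and the numeric constants below are illustrative assumptions.

```python
def max_baseline(focal_px: float, z_min: float, disparity_budget_px: float) -> float:
    """Largest camera spacing (same units as z_min) keeping the disparity
    of the closest scene point within the given pixel budget."""
    return disparity_budget_px * z_min / focal_px

# Example: focal length 500 px, nearest scene point at 1 m.
b_nyquist = max_baseline(500.0, 1.0, 1.0)   # ~2 mm spacing (1-pixel budget)
b_llff    = max_baseline(500.0, 1.0, 32.0)  # ~64 mm spacing (D = 32 planes)

# Over a 2D grid of views, the view count drops by roughly D^2 (here ~1024x),
# consistent in spirit with the paper's "up to 4000x fewer views" claim.
```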
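The rendering step itself reduces to alpha-compositing the MPI's RGBA planes. Below is a minimal sketch of the back-to-front "over" compositing, assuming each plane has already been homography-warped into the target camera (that warp is omitted here); the array shapes and the `composite_mpi` name are illustrative, not the paper's code.

```python
import numpy as np

def composite_mpi(mpi_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite an MPI's planes back-to-front with the 'over' operator.

    mpi_rgba: float array of shape (D, H, W, 4), planes ordered from
    back (index 0) to front (index D-1), with straight (non-premultiplied)
    alpha in [0, 1]. Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(mpi_rgba.shape[1:3] + (3,), dtype=np.float32)
    for plane in mpi_rgba:  # iterate back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        # 'Over' operator: the nearer plane occludes what is behind it.
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

Blending the renderings from the MPIs nearest to the target viewpoint (e.g., weighted by camera proximity) then yields the final novel view.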