Mitigating Motion Blur in Neural Radiance Fields with Events and Frames

2024 | Marco Cannici and Davide Scaramuzza
This paper proposes Ev-DeblurNeRF, a neural radiance field (NeRF) method that combines event-based and frame-based data to recover sharp NeRFs from motion-blurred images. It addresses the limitations of existing NeRF approaches, which struggle with motion blur, by integrating both model-based and learning-based components: it explicitly models the blur formation process, leverages the event double integral (EDI) as an additional model-based prior, and introduces an end-to-end learnable event-pixel response function that adapts to non-idealities of real event cameras. On both synthetic and real data, the method outperforms existing deblur NeRFs that use only frames by +6.13dB and those that combine frames and events by +2.48dB.

Ev-DeblurNeRF is evaluated on a novel event-based version of the Deblur-NeRF synthetic dataset and on a new real dataset collected with a Color DAVIS event camera. The contributions are: a novel approach for recovering sharp NeRFs in the presence of motion blur; a NeRF formulation that is +2.48dB more accurate and 6.9× faster to train than previous event-based deblur NeRF methods; and two new datasets featuring precise ground-truth poses for accurate quality assessment.

The method handles motion blur by incorporating event-based supervision, which provides high temporal resolution and is only marginally affected by blur. Event-based single and double integrals model the relationship between events and the resulting blurry frames, and a learnable event camera response function adapts this model to real event data. The architecture builds on prior deblur-NeRF designs and includes a neural module that implicitly estimates the camera motion during exposure; colors rendered via volumetric rendering at the predicted camera positions are fused into a blurry observation that can be compared against the captured blurry frame.

The approach is validated on both synthetic and real data, showing significant improvements in PSNR, LPIPS, and SSIM over existing baselines. It is robust to sparse training views and to the intensity of motion blur, maintaining higher performance even when training data is limited, and it performs well in scenarios with consistent blur, where image-only baselines struggle.

The method is implemented in PyTorch and incorporates explicit features from PDRF and TensoRF to improve training efficiency; models are trained on NVIDIA V100, RTX A6000, and A100 GPUs, with the explicit features yielding faster convergence. On real-world data, using COLMAP-estimated poses gives performance comparable to ground-truth motor-encoder poses, and the method is robust to model mismatches, handling a wide range of motion blur.
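The event double integral (EDI) mentioned above relates a blurry frame to a sharp latent image through the events accumulated during the exposure. Below is a minimal sketch of that relation under the standard EDI model (blurry frame = latent image times the time-averaged exponential of accumulated event polarities); the function names, the uniform time sampling inside the exposure, and the single global contrast threshold are illustrative assumptions, not the paper's implementation.

```python
import torch

def event_double_integral(event_sums, contrast_threshold):
    """
    EDI prior sketch: approximates (1/T) * integral over the exposure of
    exp(c * E(t0 -> t)) dt, where E(t0 -> t) is the per-pixel sum of event
    polarities between the reference time t0 and time t, and c is the
    contrast threshold.

    event_sums: (N, H, W) tensor; event_sums[i] holds E(t0 -> t_i) for N
        timestamps sampled uniformly inside the exposure window.
    """
    # Uniform sampling turns the normalized integral into a simple mean.
    return torch.exp(contrast_threshold * event_sums).mean(dim=0)

def blur_from_sharp(sharp_latent, edi):
    """Synthesize the blurry frame implied by the model: B = L * EDI."""
    return sharp_latent * edi

def sharp_from_blur(blurry, edi, eps=1e-6):
    """Invert the model for a deblurred latent estimate: L = B / EDI."""
    return blurry / (edi + eps)
```

Used as a prior, the first two functions give an extra, physics-based way to predict the blurry observation (or a deblurred target) that the rendered NeRF colors can be supervised against.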
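The learnable event-pixel response function is described only at a high level in the summary. The sketch below shows one plausible parameterization: per-polarity gains plus a small MLP correction applied to the predicted log-brightness change before it is compared with the measured event integral. The class name, the asymmetric-gain design, and the loss shown in the usage comment are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LearnableEventResponse(nn.Module):
    """Sketch of an end-to-end learnable event camera response function."""
    def __init__(self, hidden=16):
        super().__init__()
        # Assumed parameterization: asymmetric gains for ON/OFF changes
        # plus a learned residual correction.
        self.pos_gain = nn.Parameter(torch.tensor(1.0))
        self.neg_gain = nn.Parameter(torch.tensor(1.0))
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, delta_log_intensity):
        gain = torch.where(delta_log_intensity >= 0, self.pos_gain, self.neg_gain)
        x = (gain * delta_log_intensity).unsqueeze(-1)
        return (x + self.mlp(x)).squeeze(-1)

# Usage sketch (names hypothetical): compare the mapped predicted brightness
# change with the contrast threshold times the measured polarity sum.
# response = LearnableEventResponse()
# loss_evs = torch.nn.functional.mse_loss(
#     response(pred_delta_log), contrast_threshold * event_polarity_sum)
```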
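The summary also describes rendering colors at camera poses predicted along the exposure and fusing them into a blurry observation. The sketch below expresses that fusion as a (possibly weighted) average of sharp renderings; the tensor shapes, the weighting scheme, and the helper names in the usage comment (`nerf_render`, `poses_along_exposure`) are assumptions for illustration.

```python
import torch

def composite_blurry_color(rendered_colors, weights=None):
    """
    Fuse colors rendered at K camera poses sampled along the (implicitly
    estimated) exposure trajectory into a single blurry observation.

    rendered_colors: (K, B, 3) colors for K virtual sharp views and B rays.
    weights: optional (K,) non-negative fusion weights; uniform if None.
    """
    k = rendered_colors.shape[0]
    if weights is None:
        weights = torch.full((k,), 1.0 / k, device=rendered_colors.device)
    weights = weights / weights.sum()
    # Blurry pixel = weighted average of the sharp renderings over the exposure.
    return torch.einsum('k,kbc->bc', weights, rendered_colors)

# Training sketch: supervise the fused prediction with the captured blurry pixels.
# blurry_pred = composite_blurry_color(nerf_render(poses_along_exposure, rays))
# loss_rgb = torch.nn.functional.mse_loss(blurry_pred, blurry_gt)
```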