6 Jun 2024 | Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
This paper introduces a Context-based NeRF Compression (CNC) framework that reduces the storage size of Neural Radiance Fields (NeRF) while maintaining high fidelity and rendering speed. CNC compresses the explicit feature embeddings used in NeRF with efficient context models that capture level-wise and dimension-wise dependencies, and it exploits hash collisions and occupancy grids to improve the accuracy of context modeling. Compared to the baseline Instant-NGP, the method achieves a 100× storage reduction on the Synthetic-NeRF dataset and a 70× reduction on the Tanks and Temples dataset; it also outperforms the state-of-the-art NeRF compression method BiRF, with 86.7% and 82.3% storage reductions on those datasets, respectively. The framework combines entropy estimation with context models so that compression is achieved by minimizing information entropy, and its dimension-wise context model leverages 3D voxel information to improve the accuracy of probability estimation. Evaluated on the two benchmark datasets, CNC delivers superior rate-distortion (RD) performance, improving both storage efficiency and rendering quality, and its storage-friendly, efficient design makes it suitable for large-scale and dynamic NeRF applications.
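To make the entropy-minimization idea concrete, below is a minimal PyTorch sketch of how a context model could turn predicted bit probabilities for binarized feature embeddings into an estimated storage cost that is added to the training loss. This is an illustrative assumption of the general technique, not the authors' implementation; the names `ContextModel` and `estimate_bits` are hypothetical.

```python
# Minimal sketch (assumptions: binarized hash-grid feature entries in {0, 1},
# a small MLP as the context model conditioned on, e.g., coarser-level features).
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    """Predicts the probability that each binary feature entry equals 1,
    conditioned on context features (e.g., coarser-level embeddings)."""
    def __init__(self, ctx_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, ctx: torch.Tensor) -> torch.Tensor:
        # Probability of the bit being 1 for each feature entry.
        return torch.sigmoid(self.net(ctx)).squeeze(-1)

def estimate_bits(bits: torch.Tensor, prob_one: torch.Tensor) -> torch.Tensor:
    """Shannon entropy estimate of storage cost in bits:
    -log2(p) for ones and -log2(1 - p) for zeros, summed over entries."""
    eps = 1e-6
    p = prob_one.clamp(eps, 1 - eps)
    return -(bits * torch.log2(p) + (1 - bits) * torch.log2(1 - p)).sum()

# Usage sketch: the estimated bit cost acts as a rate term alongside the
# rendering (distortion) loss, so training trades off quality against storage.
# total_loss = render_loss + lambda_rate * estimate_bits(bin_feats, ctx_model(ctx))
```

A more accurate context model assigns probabilities closer to the true bit values, which lowers the estimated (and actual, after entropy coding) bit count; this is why the level-wise and dimension-wise dependencies emphasized in the paper translate directly into storage savings.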