Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

July 2022 | THOMAS MÜLLER, ALEX EVANS, CHRISTOPH SCHIED, ALEXANDER KELLER
This paper introduces a multiresolution hash encoding for neural graphics primitives, enabling efficient, high-quality training and rendering across a range of computer graphics tasks. The encoding is task-agnostic: the same implementation and hyperparameters are used for every task, with only the hash table size varied to trade off quality against performance.

The encoding consists of a multiresolution hierarchy of hash tables holding trainable feature vectors, a structure that parallelizes well on modern GPUs. Rather than resolving hash collisions with explicit probing or chaining, the design leaves collisions unresolved and lets the neural network learn to disambiguate them, which keeps the architecture simple and reduces implementation complexity. The system is implemented in CUDA, integrated with the fast fully-fused MLPs of the tiny-cuda-nn framework, and uses fully-fused kernels to minimize wasted bandwidth and compute, achieving a combined speedup of several orders of magnitude. This makes it possible to train high-quality neural graphics primitives in seconds and to render them in tens of milliseconds at a resolution of 1920 × 1080.

The encoding is validated on four representative tasks: gigapixel image fitting, neural signed distance functions (SDF), neural radiance caching (NRC), and neural radiance and density fields (NeRF). It outperforms other encodings in both reconstruction quality and training speed, with the best results obtained at larger hash table sizes, while remaining efficient in memory usage and computation; the performance vs. quality trade-off is controlled by the hash table size.
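To make the mechanism concrete, the following is a minimal NumPy sketch of the multiresolution hash encoding for 2D inputs. The hyperparameter names (L levels, table size T, feature width F, coarsest resolution N_min, per-level growth factor b) and the prime-based XOR spatial hash follow the paper; everything else here, including the function names and random initialization, is an illustrative assumption and not the authors' CUDA implementation.

```python
import numpy as np

# Per-dimension primes for the XOR spatial hash (values from the paper).
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def spatial_hash(coords, T):
    """XOR-fold integer grid coordinates into a table index in [0, T)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(T)

def encode(x, tables, N_min=16, b=1.5):
    """Encode points x in [0,1]^2 with L levels of hashed grid features.

    tables: list of L arrays, each of shape (T, F), holding the trainable
    feature vectors (here assumed to be pre-initialized by the caller).
    """
    features = []
    for level, table in enumerate(tables):
        T, F = table.shape
        N = int(np.floor(N_min * b ** level))  # grid resolution at this level
        xs = x * N
        lo = np.floor(xs).astype(np.int64)     # lower grid corner
        w = xs - lo                            # bilinear interpolation weights
        # Gather the 4 corner features and bilinearly interpolate them.
        f = np.zeros(x.shape[:-1] + (F,))
        for dx in (0, 1):
            for dy in (0, 1):
                corner = lo + np.array([dx, dy])
                idx = spatial_hash(corner, T)
                weight = ((w[..., 0] if dx else 1 - w[..., 0]) *
                          (w[..., 1] if dy else 1 - w[..., 1]))
                f += weight[..., None] * table[idx]
        features.append(f)
    # Concatenated features, shape (..., L * F), are fed to a small MLP.
    return np.concatenate(features, axis=-1)
```

In the full system this lookup runs once per query point, and gradients flow back into the table entries during training; collisions at fine levels simply superimpose gradients, and the MLP learns to resolve the resulting ambiguity.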