Deep Learning with Limited Numerical Precision


9 Feb 2015 | Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan
This paper explores the impact of limited numerical precision on the training of large-scale deep neural networks, focusing on low-precision fixed-point arithmetic and stochastic rounding. The authors demonstrate that deep networks can be effectively trained using 16-bit fixed-point numbers with stochastic rounding, achieving classification accuracy comparable to 32-bit floating-point computation. They also present a hardware accelerator designed for low-precision fixed-point arithmetic that achieves high throughput and low power consumption. The study highlights the potential of leveraging algorithm-level noise tolerance to optimize hardware and software systems for deep learning, particularly in terms of computational performance and energy efficiency.
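To illustrate the stochastic rounding idea at the heart of the paper, the sketch below quantizes values to a signed fixed-point format <IL, FL>, rounding down or up to the nearest representable step with probability proportional to the distance, so the rounding is unbiased in expectation. This is a minimal NumPy sketch, not the authors' implementation; the function name and the specific IL=4, FL=12 split shown in the example are illustrative assumptions.

```python
import numpy as np

def stochastic_round_fixed_point(x, integer_bits=4, fractional_bits=12, rng=None):
    """Quantize x to a signed fixed-point format <IL, FL> with stochastic rounding.

    With epsilon = 2**-FL, x is rounded down to the nearest multiple of epsilon
    with probability 1 - (x - floor(x)) / epsilon, and rounded up otherwise,
    so the expected value of the rounded number equals x (unbiased rounding).
    Note: function name and default format are illustrative, not from the paper's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -fractional_bits               # smallest representable step
    scaled = x / eps
    floor_scaled = np.floor(scaled)
    prob_up = scaled - floor_scaled             # fractional part in [0, 1)
    rounded = (floor_scaled + (rng.random(np.shape(x)) < prob_up)) * eps
    # Saturate to the representable range of the <IL, FL> format.
    limit = 2.0 ** (integer_bits - 1)
    return np.clip(rounded, -limit, limit - eps)

# Example: quantize a small weight matrix to a 16-bit fixed-point format.
w = np.random.randn(3, 3).astype(np.float32)
w_q = stochastic_round_fixed_point(w, integer_bits=4, fractional_bits=12)
```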