GAZELLE: A Low Latency Framework for Secure Neural Network Inference

16 Jan 2018 | Chiraag Juvekar, Vinod Vaikuntanathan, Anantha Chandrakasan
GAZELLE is a low-latency framework for secure neural network inference: it protects the client's input while enabling efficient classification with a server-trained convolutional neural network (CNN). The framework combines homomorphic encryption with two-party computation techniques such as garbled circuits. Key contributions include:

1. **Gazelle Homomorphic Layer**: Fast algorithms for the basic homomorphic operations (SIMD addition, SIMD multiplication, and ciphertext slot permutation) that significantly reduce the cost of homomorphic computation.
2. **Gazelle Linear Algebra Kernels**: Optimized homomorphic matrix-vector multiplication and convolution routines that map neural network layers efficiently onto lattice-based homomorphic encryption, which offers lower computational and communication complexity than alternative schemes.
3. **Optimized Encryption Switching Protocols**: Protocols that switch seamlessly between homomorphic and garbled-circuit encodings, allowing a complete neural network inference to run securely end to end.

GAZELLE is evaluated on benchmark neural networks trained on the MNIST and CIFAR-10 datasets. It outperforms existing systems such as MiniONN and Chameleon by orders of magnitude in online runtime and bandwidth: it achieves 20× and 30× improvements in online runtime over MiniONN and Chameleon, respectively, and a 2.5× improvement in online bandwidth over Chameleon. GAZELLE also hides more information about the neural network architecture and parameters than these systems, providing stronger privacy guarantees.
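The linear algebra kernels exploit the SIMD slots of packed lattice-based ciphertexts: a matrix-vector product is computed from slot rotations of the encrypted input combined with slot-wise multiply-accumulate steps. As a rough illustration (not GAZELLE's actual implementation), the following plaintext simulation sketches the diagonal-style rotate-and-multiply pattern such kernels build on; in a real system the rotations and elementwise operations would be homomorphic operations on packed ciphertexts.

```python
def rotate(vec, k):
    """Cyclic left rotation, modeling a homomorphic slot rotation."""
    k %= len(vec)
    return vec[k:] + vec[:k]

def diag_matvec(A, x):
    """Compute A @ x for an n x n matrix using n rotations of x and
    n slot-wise multiply-accumulate steps, one per generalized diagonal."""
    n = len(x)
    acc = [0] * n
    for d in range(n):
        diag = [A[i][(i + d) % n] for i in range(n)]  # d-th generalized diagonal
        xr = rotate(x, d)                             # rotated input vector
        acc = [a + m * v for a, m, v in zip(acc, diag, xr)]
    return acc
```

The point of this access pattern is that each of the n steps touches every slot at once, so the homomorphic cost scales with the number of rotations rather than the number of matrix entries.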
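The switching protocols rest on a standard masking idea: the server homomorphically adds a uniformly random value to the encrypted result, so that after the client decrypts, the two parties hold additive secret shares of the result that can feed a garbled circuit. A minimal plaintext sketch of that share conversion follows; the modulus and function names here are illustrative, and the homomorphic addition and decryption steps are elided.

```python
import secrets

P = 2**20  # illustrative plaintext modulus; real schemes fix specific primes

def mask_and_share(y):
    """Model the HE -> garbled-circuit switch: the server picks a random
    mask r, the client ends up decrypting y + r, and the two values form
    additive shares of y modulo P."""
    r = secrets.randbelow(P)    # server's uniformly random mask
    client_share = (y + r) % P  # what the client's decryption yields
    server_share = (-r) % P     # the server keeps the negated mask
    return client_share, server_share

def reconstruct(client_share, server_share):
    """Recombine the shares (done implicitly inside the garbled circuit)."""
    return (client_share + server_share) % P
```

Because the mask is uniform modulo P, the client's share reveals nothing about y on its own, which is what makes the handoff between the two encodings secure.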