October 25, 2016 | Martín Abadi, Andy Chu, H. Brendan McMahan, Ilya Mironov, Li Zhang, Ian Goodfellow, Kunal Talwar
This paper presents a novel approach to training deep neural networks while preserving differential privacy. The authors develop new algorithmic techniques and a refined analysis of privacy costs within the differential privacy framework. They demonstrate that deep neural networks with non-convex objectives can be trained under a modest privacy budget, at manageable cost in software complexity, training efficiency, and model quality. The paper includes an introduction to differential privacy and deep learning, a detailed description of the proposed approach, and experimental results on the MNIST and CIFAR-10 datasets. The authors also discuss related work and conclude with directions for future research. Key contributions include the moments accountant, which yields tighter bounds on cumulative privacy loss, and an implementation of differentially private SGD in TensorFlow.
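The differentially private SGD scheme the paper describes can be sketched in a few lines: clip each example's gradient to a fixed norm bound, average the clipped gradients over the batch, and add Gaussian noise calibrated to that bound before taking the step. The snippet below is a minimal NumPy illustration of that idea, not the paper's TensorFlow implementation; the function name and parameter defaults (`clip_norm`, `noise_multiplier`) are illustrative choices.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD step (sketch): per-example clipping + Gaussian noise.

    `per_example_grads` is a list of gradient vectors, one per example.
    Noise standard deviation is noise_multiplier * clip_norm, as in the
    Gaussian mechanism applied to the clipped gradient sum.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds clip_norm.
        clipped.append(g / max(1.0, norm / clip_norm))
    batch_size = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_mean
```

The privacy accounting itself (tracking how the privacy loss accumulates over many such noisy steps) is where the paper's moments accountant comes in; this sketch only shows the per-step mechanism.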