October 25, 2018 | Kamalika Chaudhuri, Claire Monteleoni, Anand D. Sarwate
This paper presents differentially private algorithms for empirical risk minimization (ERM) in classification. The authors propose two methods: output perturbation, which adds noise to the minimizer produced by the ERM algorithm, and objective perturbation, which adds a random linear term to the objective function before optimization. Both methods satisfy $\epsilon$-differential privacy, a strong guarantee that protects individual records in the training data. Theoretical results establish privacy and generalization bounds for both linear and nonlinear kernels, and the methods are applied to regularized logistic regression and support vector machines, with good empirical results on real-world datasets. The paper also addresses parameter tuning in privacy-preserving learning, giving a procedure with an end-to-end privacy guarantee. Both in theory and in experiments, objective perturbation manages the trade-off between privacy and learning performance better than output perturbation. Overall, the paper contributes efficient, differentially private algorithms for ERM in classification tasks.
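To make the two mechanisms concrete, below is a minimal Python sketch of output perturbation and objective perturbation for regularized logistic regression. The noise-scale constants, the synthetic data, and the omission of the paper's curvature correction for objective perturbation are simplifications for illustration, not the authors' exact algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def sample_noise(d, scale, rng):
    """Sample a d-dimensional vector with density proportional to
    exp(-||b|| / scale): uniform direction, Gamma-distributed norm."""
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return rng.gamma(shape=d, scale=scale) * direction

def logistic_erm_objective(w, X, y, lam, b=None):
    """Regularized logistic-regression ERM objective; b adds the optional
    linear perturbation term used by objective perturbation."""
    n = X.shape[0]
    margins = y * (X @ w)
    loss = np.mean(np.log1p(np.exp(-margins)))
    reg = 0.5 * lam * (w @ w)
    perturb = (b @ w) / n if b is not None else 0.0
    return loss + reg + perturb

def output_perturbation(X, y, lam, eps, rng):
    """Train non-private ERM, then add noise scaled to the L2 sensitivity
    of the regularized minimizer (illustrative constant 2 / (n * lam))."""
    n, d = X.shape
    w_hat = minimize(logistic_erm_objective, np.zeros(d), args=(X, y, lam)).x
    return w_hat + sample_noise(d, scale=2.0 / (n * lam * eps), rng=rng)

def objective_perturbation(X, y, lam, eps, rng):
    """Add a random linear term to the objective before optimizing.
    (Sketch only: the paper also corrects eps for the loss curvature.)"""
    n, d = X.shape
    b = sample_noise(d, scale=2.0 / eps, rng=rng)
    return minimize(logistic_erm_objective, np.zeros(d),
                    args=(X, y, lam, b)).x

# Toy usage on synthetic data with labels in {-1, +1} and ||x|| <= 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
w_out = output_perturbation(X, y, lam=0.1, eps=1.0, rng=rng)
w_obj = objective_perturbation(X, y, lam=0.1, eps=1.0, rng=rng)
```

The noise is drawn with a uniformly random direction and a Gamma-distributed norm so that its density decays as $\exp(-\lVert b \rVert / \beta)$, the high-dimensional Laplace-style distribution this family of mechanisms relies on; the key contrast is that output perturbation noises the trained weights after the fact, while objective perturbation randomizes the optimization problem itself.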