Differentially Private Empirical Risk Minimization


October 25, 2018 | Kamalika Chaudhuri, Claire Monteleoni, Anand D. Sarwate
The paper "Differentially Private Empirical Risk Minimization" by Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate addresses the challenge of preserving privacy in machine learning while still achieving accurate predictions. The authors propose two methods for differentially private empirical risk minimization (ERM): output perturbation and objective perturbation. Output perturbation involves adding noise to the output of the standard ERM algorithm, while objective perturbation involves perturbing the objective function before optimizing over classifiers. The paper provides theoretical guarantees for the privacy and generalization performance of these methods under the $\epsilon$-differential privacy model. It also demonstrates the effectiveness of these methods through experiments on real datasets, showing that objective perturbation outperforms output perturbation in terms of both privacy and performance. The paper further extends these methods to logistic regression and support vector machines, providing privacy-preserving versions of these popular classification algorithms.The paper "Differentially Private Empirical Risk Minimization" by Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate addresses the challenge of preserving privacy in machine learning while still achieving accurate predictions. The authors propose two methods for differentially private empirical risk minimization (ERM): output perturbation and objective perturbation. Output perturbation involves adding noise to the output of the standard ERM algorithm, while objective perturbation involves perturbing the objective function before optimizing over classifiers. The paper provides theoretical guarantees for the privacy and generalization performance of these methods under the $\epsilon$-differential privacy model. It also demonstrates the effectiveness of these methods through experiments on real datasets, showing that objective perturbation outperforms output perturbation in terms of both privacy and performance. The paper further extends these methods to logistic regression and support vector machines, providing privacy-preserving versions of these popular classification algorithms.