What Can We Learn Privately?

February 13, 2013 | Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith
This paper investigates private learning: learning from data while preserving the privacy of the individuals who contributed it. The goal is to understand the resources private learning requires in terms of samples, computation time, and interaction.

The authors show that, ignoring computational constraints, any concept class can be privately and agnostically learned with a sample size roughly logarithmic in the cardinality of the concept class. Consequently, almost anything learnable is learnable privately: if a concept class is learnable by a non-private algorithm with polynomial sample complexity and output size, it can also be learned privately with a polynomial number of samples.

The paper also presents a computationally efficient private PAC learner for the class of parity functions. This result dispels an apparent similarity between private learning and learning with noise, since parity is believed to be very hard to learn under random classification noise yet is privately learnable.

Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. The authors give a precise characterization of local private learning: a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. For local algorithms, then, the similarity to learning with noise is genuine: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms.

Finally, the paper separates the power of interactive and noninteractive local learning algorithms. By the equivalence with SQ learning, this result also separates adaptive from nonadaptive SQ learning. The authors discuss the implications of these results, including the fact that "anything" learnable is privately learnable using few samples.
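The generic learner behind the logarithmic sample bound can be viewed as an instance of the exponential mechanism: score every hypothesis by how many samples it labels correctly, then sample a hypothesis with probability exponentially weighted by its score. The sketch below is a minimal illustration of that idea, not the paper's construction verbatim; the function and variable names are ours.

```python
import math
import random

def private_agnostic_learner(samples, hypotheses, epsilon):
    # Exponential-mechanism-style private learner (illustrative sketch).
    # score(h) = number of samples h labels correctly; output h with
    # probability proportional to exp(epsilon * score(h) / 2).
    # Changing one sample shifts any score by at most 1, which is what
    # yields epsilon-differential privacy for this selection step.
    scores = [sum(1 for x, y in samples if h(x) == y) for h in hypotheses]
    weights = [math.exp(epsilon * s / 2) for s in scores]
    return random.choices(hypotheses, weights=weights, k=1)[0]

random.seed(0)
# Toy concept class: one-dimensional thresholds h_c(x) = 1 iff x >= c.
hypotheses = [lambda x, c=c: int(x >= c) for c in range(5)]
samples = [(x, int(x >= 2)) for x in range(5)]  # labels from threshold c = 2
h = private_agnostic_learner(samples, hypotheses, epsilon=100.0)
```

With a large epsilon and a hypothesis that fits all samples, the mechanism returns that hypothesis with overwhelming probability; smaller epsilon trades accuracy for stronger privacy, which is where the log-of-class-size sample complexity comes from.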
The paper also discusses the limitations of local algorithms and the equivalence between local and SQ algorithms. The authors conclude that private learning is feasible and that the results provide a broad understanding of the classes of learning problems that are solvable subject to privacy constraints.