This paper extends Valiant's learning model to incorporate classification noise, in which the labels provided with random examples may be corrupted. The authors formalize a weak sufficient condition on learning algorithms that permits the derivation of noise-tolerant learning algorithms. They introduce a new model of learning from statistical queries, in which the standard Valiant model example oracle is replaced by a weaker oracle that provides accurate estimates of probabilities over the sample space generated by the example oracle. This model captures algorithms that construct hypotheses from the statistical properties of large samples rather than from individual examples. The main theorem shows that any class efficiently learnable from statistical queries is also efficiently learnable with classification noise, even with respect to specific distributions. The authors give efficient noise-tolerant learning algorithms for several concept classes, including perceptrons, conjunctions, 2-dimensional axis-aligned rectangles, and $AC^0$ circuits. They also prove that the class of parity functions, known to be efficiently learnable in the Valiant model, cannot be efficiently learned from statistical queries. The paper analyzes query complexity in the statistical query model, showing that for any concept class of Vapnik-Chervonenkis dimension $d$, a statistical query algorithm must make at least $\Omega(d / \log d)$ queries to obtain a hypothesis with error less than $\epsilon$. Finally, the authors explore the equivalence of learning in the classification noise model and a more realistic model with a variable noise rate.
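To make the simulation behind the main theorem concrete, the following Python sketch illustrates the key identity: a statistical query $P_\chi = \Pr[\chi(x, f(x)) = 1]$ over correctly labeled examples can be estimated from examples whose labels were flipped independently with known rate $\eta < 1/2$. This is a minimal sketch, not the paper's exact procedure; the names `chi`, `noisy_examples`, and `eta` are illustrative, and the full construction also handles an unknown noise rate (by searching over candidate values of $\eta$) and calibrates sample sizes to the query tolerance.

```python
import random

def simulate_stat_query(chi, noisy_examples, eta):
    """Estimate P_chi = Pr[chi(x, f(x)) = 1] from classification-noise
    examples (x, label), where each label equals f(x) flipped
    independently with known probability eta < 1/2. (Illustrative sketch.)"""
    n = len(noisy_examples)
    # N estimates (1 - eta) * A + eta * B, where A = Pr[chi(x, f(x)) = 1]
    # and B = Pr[chi(x, 1 - f(x)) = 1].
    N = sum(chi(x, label) for x, label in noisy_examples) / n
    # S = A + B = Pr[chi(x, 0) = 1] + Pr[chi(x, 1) = 1] does not depend
    # on the true labels, so it is estimable from the instances alone.
    S = sum(chi(x, 0) + chi(x, 1) for x, _ in noisy_examples) / n
    # Since N = A * (1 - 2 * eta) + eta * S, solve for A.
    return (N - eta * S) / (1 - 2 * eta)

# Tiny demonstration with target f(x) = x over uniform x in {0, 1}
# and noise rate 0.2:
f = lambda x: x
eta = 0.2
data = []
for _ in range(100_000):
    x = random.randint(0, 1)
    label = f(x) if random.random() > eta else 1 - f(x)
    data.append((x, label))
chi = lambda x, l: int(x == 1 and l == 1)  # query: Pr[x = 1 and f(x) = 1]
print(simulate_stat_query(chi, data, eta))  # close to the true value 0.5
```

The division by $1 - 2\eta$ in the final step is why the sample sizes in such simulations grow as the noise rate approaches $1/2$.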