A Theory of the Learnable

November 1984 | L. G. Valiant
This paper presents a theory of learnable concepts grounded in computational models, focusing on Boolean functions and their recognition. Valiant defines learning as knowledge acquisition without explicit programming: a machine should be able to deduce a recognition algorithm for a concept through a learning protocol. Learning machines are required to have three properties: (1) they can provably learn whole classes of concepts, (2) these classes are appropriate and nontrivial for general-purpose knowledge, and (3) the computational process requires only a feasible (polynomial) number of steps.

The learning protocol consists of two routines: EXAMPLES, which supplies positive examples of the concept, and ORACLE, which answers whether a given vector exemplifies the concept. Within this framework, the paper analyzes the learnability of several classes of Boolean expressions: bounded CNF (conjunctive normal form) expressions, in which each clause has at most k literals; monotone DNF (disjunctive normal form) expressions; and μ-expressions, in which each variable appears at most once. Bounded CNF expressions are shown to be learnable from positive examples alone, without calls to the oracle, while the algorithm for monotone DNF expressions uses both EXAMPLES and ORACLE.

The paper also discusses the limits of learning, noting that certain classes of Boolean functions, such as unrestricted DNF expressions, may not be learnable within these computational constraints. A combinatorial bound on the number of calls to EXAMPLES shows that, with sufficiently many examples, a learner can approximate the target concept with high probability. The paper concludes that while learning is provably feasible for certain classes of concepts, there are significant limitations, particularly for complex or large-scale functions.
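The positive-only algorithm for bounded CNF follows a simple elimination scheme: start with the conjunction of every clause of at most k literals, then delete any clause falsified by a positive example. Since every clause of the target expression survives, the hypothesis can only err by rejecting positives, never by accepting negatives. A minimal Python sketch (the representation and function names are illustrative, not from the paper):

```python
from itertools import combinations

def all_clauses(n, k):
    """All disjunctions of at most k literals over n variables.
    A literal is (index, polarity); a clause is a frozenset of literals."""
    literals = [(i, b) for i in range(n) for b in (True, False)]
    clauses = set()
    for size in range(1, k + 1):
        for combo in combinations(literals, size):
            # skip clauses mentioning a variable in both polarities (tautologies)
            if len({i for i, _ in combo}) == size:
                clauses.add(frozenset(combo))
    return clauses

def satisfies(clause, v):
    """A clause is satisfied if any of its literals matches the vector v."""
    return any(v[i] == b for i, b in clause)

def learn_kcnf(n, k, positive_examples):
    """Elimination learner for k-CNF: delete every clause that some
    positive example falsifies; the survivors form the hypothesis."""
    h = all_clauses(n, k)
    for v in positive_examples:
        h = {c for c in h if satisfies(c, v)}
    return h

def evaluate(h, v):
    """A CNF hypothesis accepts v iff every remaining clause is satisfied."""
    return all(satisfies(c, v) for c in h)
```

For the target x0 OR x1 over two variables, the positive examples (1,0), (0,1), (1,1) eliminate every clause except {x0, x1}, so the learned hypothesis rejects (0,0).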
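The monotone DNF algorithm uses both routines: it draws a positive example from EXAMPLES and, if the current hypothesis does not already cover it, shrinks it to a prime implicant by turning off bits one at a time and keeping only those ORACLE confirms are essential; the surviving bits form a new monomial. The following sketch is a plausible reconstruction of that loop, not the paper's code verbatim:

```python
from itertools import cycle

def eval_dnf(terms, v):
    """Evaluate a monotone DNF: each term is a set of variable indices
    that must all be true."""
    return any(all(v[i] for i in t) for t in terms)

def learn_monotone_dnf(n, examples, oracle, rounds):
    """Draw positive examples; for each one not yet covered, shrink it
    to a prime implicant with ORACLE and add the resulting monomial."""
    terms = []
    for _ in range(rounds):
        v = list(examples())
        if eval_dnf(terms, v):
            continue          # already covered by the current hypothesis
        for i in range(n):
            if v[i]:
                v[i] = False              # try dropping variable i
                if not oracle(tuple(v)):
                    v[i] = True           # i is essential; restore it
        terms.append(frozenset(i for i in range(n) if v[i]))
    return terms
```

For the target (x0 AND x1) OR x2, two positive examples suffice here: (1,1,0) shrinks to the term {x0, x1} and (0,0,1) to {x2}, recovering the target exactly.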
The results highlight the importance of understanding the boundaries of what can be learned within the constraints of algorithmic complexity.
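The combinatorial bound referred to above is, in the paper's notation, a function L(h, S): the smallest number of independent trials, each succeeding with probability at least 1/h, such that fewer than S successes occur with probability less than 1/h; the paper bounds it by L(h, S) ≤ 2h(S + ln h). A small numerical check of that bound (the function names here are mine, and the exact binomial tail stands in for the paper's analytic argument):

```python
import math

def trials_upper_bound(h, S):
    """The paper's bound: L(h, S) <= 2h(S + ln h) trials suffice."""
    return math.ceil(2 * h * (S + math.log(h)))

def prob_fewer_than_S(m, p, S):
    """Exact binomial tail: P(fewer than S successes in m trials,
    each succeeding independently with probability p)."""
    return sum(math.comb(m, i) * p**i * (1 - p)**(m - i) for i in range(S))
```

For h = 10 and S = 5 the bound gives 2·10·(5 + ln 10) ≈ 147 trials, and the exact tail probability of seeing fewer than 5 successes is then well below 1/h = 0.1.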