This paper explores the phenomenon of learning from a computational perspective, focusing on the acquisition of knowledge without explicit programming. It introduces a methodology for studying learning through the selection of appropriate information gathering mechanisms (learning protocols) and the exploration of the class of concepts that can be learned in polynomial time. The paper highlights the limitations imposed by algorithmic complexity but demonstrates that certain nontrivial classes of propositional concepts can be learned in a realistic manner.
The main contributions of the paper include:
1. Designing learning machines that can provably learn whole classes of concepts, characterized by specific classes of Boolean expressions.
2. Ensuring that these classes are appropriate and nontrivial for general-purpose knowledge.
3. Ensuring that the computational process required to deduce the desired programs is feasible (polynomial).
The paper discusses two types of information supply: positive examples and oracles. It also addresses the choice of knowledge representation, favoring Boolean functions over other forms such as formal grammars or geometrical constructs. Whether the propositional calculus is adequate for representing knowledge in practical learning systems is a crucial and controversial question; the paper argues that it suffices for the purposes of the study.
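The two protocols can be pictured as a pair of callables the learner may query. A minimal sketch, with hypothetical names (`make_protocols`, `examples`, `oracle` are illustrative, not from the paper), in which a fixed pool of positive examples stands in for the paper's natural distribution:

```python
import random

def make_protocols(concept, positive_pool, rng=random.Random(0)):
    """Sketch of the two information-supply protocols.

    concept: function mapping a Boolean vector to True/False.
    positive_pool: vectors satisfying the concept; stands in here for
    the natural probability distribution over positive examples.
    """

    def examples():
        # EXAMPLES protocol: returns a positive example drawn at random.
        return rng.choice(positive_pool)

    def oracle(v):
        # ORACLE protocol: answers whether vector v belongs to the concept.
        return concept(v)

    return examples, oracle
```

The learner never inspects `concept` directly; it only sees the answers these two calls supply, which is what makes the distinction between the weaker (examples-only) and stronger (oracle-equipped) protocols meaningful.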
The paper provides specific examples of learnable classes, including conjunctive normal form expressions with a bounded number of literals per clause (k-CNF), monotone disjunctive normal form expressions, and expressions in which each variable occurs at most once. It also presents a combinatorial bound and detailed algorithms for learning these classes, demonstrating that highly convergent learning is possible for whole classes of Boolean functions.
Finally, the paper discusses the learnability of μ-expressions, which are expressions where each variable appears at most once, using more powerful oracles. It provides a detailed algorithm for learning μ-expressions, showing that it can deduce the correct expression with high probability.
Overall, the paper advances the understanding of the limits and possibilities of what can be learned computationally, providing a rigorous framework for studying learning phenomena.