October 1989 | ANSELM BLUMER, ANDRZEJ EHRENFEUCHT, DAVID HAUSSLER AND MANFRED K. WARMUTH
This paper extends Valiant's learnability model to learning concepts defined by regions in Euclidean space \(E^n\). The authors introduce the Vapnik-Chervonenkis (VC) dimension, a combinatorial parameter, to analyze the complexity and closure properties of learnable classes. They show that the essential condition for distribution-free learnability is the finiteness of the VC dimension. The paper provides necessary and sufficient conditions for feasible learnability, including bounds on the sample complexity. The authors also discuss computational feasibility, presenting two types of learning algorithms: one for domains of arbitrary Euclidean dimension and another for fixed domains with varying target complexity. They characterize polynomial learnability with respect to domain dimension and target complexity, demonstrating that certain concept classes are polynomially learnable under specific conditions. The results have implications for various pattern recognition algorithms and provide insights into the relationship between data compression and learning.
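For orientation, the VC dimension of a concept class is the size of the largest point set that the class can label in every possible way; axis-parallel rectangles in the plane, for instance, have VC dimension 4. A representative form of the paper's distribution-free sample complexity bound (the constants here are indicative; see the paper for the exact statement) is: if a class \(C\) has VC dimension \(d\), then any hypothesis from \(C\) consistent with

\[
m \ge \max\!\left(\frac{4}{\varepsilon}\,\log\frac{2}{\delta},\; \frac{8d}{\varepsilon}\,\log\frac{13}{\varepsilon}\right)
\]

random examples has error at most \(\varepsilon\) with probability at least \(1-\delta\), for any target concept in \(C\) and any distribution on the domain.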