This paper addresses the problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. Its main result is a proof that strong and weak learnability are equivalent: a method is described for converting a weak learning algorithm, one whose hypotheses perform only slightly better than random guessing, into one that achieves arbitrarily high accuracy. This construction has practical applications and theoretical consequences, including general upper bounds on the complexity of strong learning algorithms as a function of the allowed error ε. The paper also examines the space complexity of the resulting learner and the implications of the equivalence for other learning models, concluding with a discussion of the on-line learning model and the relative space efficiency of batch and on-line algorithms.
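To make the conversion concrete, below is a minimal sketch of the boosting idea: a weak learner is run repeatedly on reweighted versions of the sample, and a weighted majority vote over its hypotheses drives the error down. For readability it uses AdaBoost-style reweighting, a later descendant of this paper's construction, rather than the paper's recursive majority scheme; the `weak_learn` decision stump and the toy data are illustrative assumptions, not part of the paper.

```python
# Sketch only: AdaBoost-style boosting of a hypothetical 1-D decision-stump
# weak learner. Illustrates the weak-to-strong conversion, not the paper's
# exact construction.
import math
import random

def weak_learn(xs, ys, weights):
    """Hypothetical weak learner: best threshold stump on 1-D inputs."""
    best = None
    for thresh in xs:
        for sign in (+1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if (sign if x >= thresh else -sign) != y)
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    err, thresh, sign = best
    return (lambda x, t=thresh, s=sign: s if x >= t else -s), err

def boost(xs, ys, rounds):
    n = len(xs)
    weights = [1.0 / n] * n
    hypotheses = []  # list of (alpha, h) pairs
    for _ in range(rounds):
        h, err = weak_learn(xs, ys, weights)
        err = max(err, 1e-10)                    # guard against zero error
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this round
        hypotheses.append((alpha, h))
        # Reweight: emphasize the examples this hypothesis got wrong.
        weights = [w * math.exp(-alpha * y * h(x))
                   for x, y, w in zip(xs, ys, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    def strong(x):
        # Final hypothesis: weighted majority vote of the weak hypotheses.
        return 1 if sum(a * h(x) for a, h in hypotheses) >= 0 else -1
    return strong

if __name__ == "__main__":
    random.seed(0)
    xs = [random.uniform(-1, 1) for _ in range(200)]
    ys = [1 if abs(x) > 0.5 else -1 for x in xs]  # no single stump suffices
    strong = boost(xs, ys, rounds=20)
    acc = sum(strong(x) == y for x, y in zip(xs, ys)) / len(xs)
    print(f"training accuracy after boosting: {acc:.2f}")
```

The toy target (|x| > 0.5) is chosen so that no single stump can classify it well, while the boosted vote of many stumps can, mirroring the paper's point that hypotheses only slightly better than chance can be combined into an arbitrarily accurate one.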