Neural Network Ensembles

OCTOBER 1990 | LARS KAI HANSEN AND PETER SALAMON
This paper proposes methods to improve the training and classification performance of neural networks. It uses cross-validation to optimize network parameters and architecture, and shows that the residual generalization error can be reduced further by combining similarly trained networks into an ensemble. In the cross-validation procedure, the database is split into a training set and a test set, and performance on the held-out test set measures the network's ability to generalize.

The paper then introduces ensembles, in which several networks classify each input jointly through a consensus scheme. Ensembles can be more reliable than any individual network because different networks tend to err on different subsets of the input space, so their mistakes rarely coincide. The paper also examines the many local minima encountered in neural network training and how the choice of search method and objective affects the resulting networks. It presents models of ensemble performance that account for both the difficulty of individual inputs and the proficiency of individual networks.

Experiments on two example problems show that ensembles significantly improve classification performance: ensembles outperform individual networks, and cross-validation proves effective for selecting the network architecture. The paper concludes that an ensemble with a plurality consensus scheme performs better than a single network.
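The plurality consensus scheme described above can be sketched as follows: each network predicts a class label for every input, and the ensemble outputs the label receiving the most votes. This is an illustrative sketch, not the authors' implementation; the networks here are replaced by hypothetical precomputed label arrays.

```python
import numpy as np

def plurality_vote(predictions):
    """Combine class predictions from several networks by plurality vote.

    predictions: array-like of shape (n_networks, n_samples), where each
    row holds one network's predicted class label for every input.
    Returns the most-voted label per input (ties broken by lowest label).
    """
    votes = np.asarray(predictions)
    n_classes = votes.max() + 1
    # For each sample (column), count the votes each class received.
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, votes)
    # The ensemble's decision is the class with the most votes.
    return counts.argmax(axis=0)

# Three hypothetical networks classifying five inputs (labels 0 or 1).
# Each network errs on a different input, but the errors do not coincide,
# so the plurality vote recovers the majority opinion on every input.
preds = [[0, 1, 1, 0, 1],
         [0, 1, 0, 0, 1],
         [1, 1, 1, 0, 1]]
print(plurality_vote(preds))  # -> [0 1 1 0 1]
```

Because the individual networks' errors fall on different inputs, the consensus is correct even when each single network is wrong somewhere, which is the intuition behind the paper's reliability argument.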