An empirical comparison of voting classification algorithms, including Bagging and boosting, is presented. The study evaluates how these methods affect classification error using decision trees and Naive-Bayes as base classifiers. Bagging reduces the variance of unstable methods, while boosting methods such as AdaBoost and Arc-x4 reduce both bias and variance for unstable methods but increase the variance of Naive-Bayes. Arc-x4 behaves differently from AdaBoost when reweighting is used instead of resampling. Voting variants examined include pruning, probabilistic estimates, weight perturbation (Wagging), and backfitting; Bagging improves when combined with probabilistic estimates and backfitting.

The paper discusses the bias-variance decomposition of error and shows how each method influences the two terms: boosting methods significantly reduce mean-squared error relative to non-voting methods, with boosting showing the larger reductions in classification error overall. Practical issues such as numerical instability and underflow are also addressed. The results indicate that Bagging and boosting improve classification accuracy, but their performance depends on the stability of the base classifiers and the choice of parameters, underscoring the importance of matching algorithm and parameter settings to the dataset.
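Bagging's variance reduction comes from voting many classifiers, each trained on a bootstrap replicate of the training set. A minimal sketch of that procedure is below; the `MajorityClassifier` base learner is a placeholder of our own (the study uses decision trees and Naive-Bayes), and the helper names are illustrative, not from the paper.

```python
import random
from collections import Counter

class MajorityClassifier:
    """Placeholder base learner: predicts the majority class of its training sample."""
    def fit(self, X, y):
        self.label = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, x):
        return self.label

def bagging(X, y, base_cls, n_models=25, seed=0):
    """Train n_models base classifiers, each on a bootstrap sample of (X, y),
    and return a predictor that takes an unweighted majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Sample len(X) indices with replacement (a bootstrap replicate).
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(base_cls().fit([X[i] for i in idx], [y[i] for i in idx]))

    def predict(x):
        votes = Counter(m.predict(x) for m in models)  # one vote per model
        return votes.most_common(1)[0][0]

    return predict
```

Because each replicate omits some examples and repeats others, unstable base learners (those whose output changes noticeably with the training sample) produce diverse votes, which is where the variance reduction comes from.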
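The contrast between AdaBoost and Arc-x4 lies in their reweighting rules, and can be sketched in a few lines. The first helper follows the AdaBoost.M1 update (correct examples are scaled by beta = eps / (1 - eps), then weights are renormalized); the second follows Arc-x4's 1 + m^4 rule; the function names themselves are illustrative.

```python
def adaboost_weights(predictions, y, w):
    """One AdaBoost.M1 round: given weak-learner predictions, true labels y,
    and current example weights w, return (beta, updated normalized weights).
    Misclassified examples keep their weight; correctly classified ones are
    scaled down by beta = eps / (1 - eps), so errors gain relative weight."""
    eps = sum(wi for wi, p, t in zip(w, predictions, y) if p != t)
    beta = eps / (1.0 - eps)
    new_w = [wi * (beta if p == t else 1.0)
             for wi, p, t in zip(w, predictions, y)]
    z = sum(new_w)  # renormalize so the weights form a distribution
    return beta, [wi / z for wi in new_w]

def arc_x4_weights(mistakes):
    """Arc-x4 round: weight example i proportionally to 1 + m_i**4, where m_i
    counts how often i has been misclassified by the classifiers built so far.
    Unlike AdaBoost, the update ignores the current round's error rate."""
    raw = [1.0 + m ** 4 for m in mistakes]
    z = sum(raw)
    return [r / z for r in raw]
```

The AdaBoost update depends on the weighted error of the latest classifier, while Arc-x4 depends only on cumulative mistake counts; applying either rule directly to the full training set (reweighting) rather than drawing a weighted sample (resampling) is the distinction the summary refers to.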