This thesis by Frans van den Bergh, submitted for the degree of Philosophiae Doctor at the University of Pretoria, focuses on the analysis and improvement of Particle Swarm Optimizers (PSO). PSO is a relatively young optimization technique that has shown empirical success on a variety of optimization problems, such as minimizing losses in power grids and training neural networks. The thesis aims to develop a theoretical model describing the long-term behavior of PSO, and to enhance the algorithm so that convergence to local and global minima can be guaranteed.
Key contributions of the thesis include:
1. **Theoretical Model**: A theoretical model is developed to predict the long-term behavior of PSO under different parameter settings.
2. **Enhanced PSO Algorithm**: An enhanced version of PSO is constructed that guarantees convergence to a local minimum.
3. **Global Convergence**: The enhanced algorithm is further extended to guarantee convergence to the global minimum.
4. **Cooperative PSO Algorithms**: Two new cooperative PSO algorithms are introduced, based on successful models from other evolutionary algorithms.
5. **Empirical Validation**: Empirical results using synthetic benchmark functions support the theoretical properties of the algorithms.
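To make the algorithm under discussion concrete, the following is a minimal sketch of the canonical inertia-weight PSO that the thesis analyzes and builds upon. It is an illustrative implementation, not the thesis's enhanced (guaranteed-convergence) variant; the coefficient values `w`, `c1`, and `c2` are common settings from the PSO literature, and the function and parameter names are this sketch's own.

```python
import random

def pso(f, dim, n_particles=20, iters=200,
        w=0.7298, c1=1.49618, c2=1.49618, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a canonical inertia-weight PSO."""
    # Initialize particle positions uniformly at random, velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-seen position
    pbest_val = [f(p) for p in pos]      # and its objective value
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social components.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, running this sketch on the 2-dimensional sphere function `f(x) = sum(xi**2)`, one of the standard synthetic benchmarks mentioned above, drives the swarm toward the minimum at the origin. The thesis's theoretical model concerns precisely how choices of `w`, `c1`, and `c2` govern whether particle trajectories converge.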
The thesis also investigates the application of these PSO-based algorithms to neural network training, providing empirical evidence of their effectiveness. The work contributes to the field of optimization by advancing the understanding and practical implementation of PSO and related algorithms.