April 1994 | M. Srinivas and L. M. Patnaik, Fellow, IEEE
This paper presents an Adaptive Genetic Algorithm (AGA) for multimodal function optimization. The AGA uses adaptive probabilities of crossover (pc) and mutation (pm) to maintain population diversity while sustaining convergence. The probabilities are adjusted according to the fitness values of the solutions: high-fitness solutions are protected, while low-fitness solutions are disrupted. This eliminates the need to specify pc and pm in advance, since the algorithm determines them adaptively. The AGA is compared with previous techniques for adapting operator probabilities in genetic algorithms, and the schema theorem is derived for the AGA. The AGA converges to the global optimum in fewer generations than the Standard GA (SGA) and gets stuck at local optima less frequently; experiments show that its advantage over the SGA grows with the epistasis and multimodality of the objective function. Compared with other adaptive strategies, the AGA is more effective at preventing premature convergence and maintaining diversity. Its performance is evaluated on a range of test problems, including multimodal functions, the traveling salesman problem (TSP), and neural networks; the AGA outperforms the SGA in most cases, particularly on complex problems. The adaptive probabilities of crossover and mutation are shown to balance exploration and exploitation, yielding better convergence and performance. The paper concludes that the AGA is a promising step toward self-organizing genetic algorithms that adapt themselves to locate the global optimum in multimodal landscapes.
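The adaptive rule summarized above — scaling pc and pm by how far a solution's fitness lies below the population maximum, so that the best solutions are preserved and poor ones are disrupted — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the guard against a fully converged population, and the constants k1–k4 are illustrative choices.

```python
def adaptive_probabilities(f_parent, f_ind, f_max, f_avg,
                           k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Return (pc, pm) adapted to fitness, for a maximization problem.

    f_parent: larger fitness of the two parents selected for crossover
    f_ind:    fitness of the individual considered for mutation
    f_max:    maximum fitness in the current population
    f_avg:    average fitness in the current population
    """
    # Guard against division by zero when the population has converged
    # (f_max == f_avg); illustrative choice, not from the paper.
    spread = max(f_max - f_avg, 1e-12)

    # Above-average parents get a reduced crossover probability
    # (protection); below-average parents are crossed at the full rate k3.
    pc = k1 * (f_max - f_parent) / spread if f_parent >= f_avg else k3

    # Likewise for mutation: the best individual (f_ind == f_max) gets
    # pm = 0 and is preserved intact, while poor individuals get k4.
    pm = k2 * (f_max - f_ind) / spread if f_ind >= f_avg else k4
    return pc, pm
```

For example, the population's best individual receives pc = pm = 0 and survives unchanged, a mid-range individual receives intermediate probabilities, and a below-average individual is recombined and mutated at the full rates k3 and k4 — which is how the scheme trades exploitation against exploration without hand-tuned fixed values of pc and pm.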