Adaptive Probabilities of Crossover and Mutation in Genetic Algorithms

IEEE Transactions on Systems, Man, and Cybernetics, Vol. 24, No. 4, April 1994 | M. Srinivas and L. M. Patnaik, Fellow, IEEE
This paper introduces an efficient approach to multimodal function optimization using Genetic Algorithms (GAs). The authors recommend adaptive probabilities of crossover and mutation to maintain population diversity while sustaining the capacity to converge. In the Adaptive Genetic Algorithm (AGA), the probabilities of crossover ($p_c$) and mutation ($p_m$) are varied with the fitness values of the solutions: high-fitness solutions are assigned low $p_c$ and $p_m$ and are thereby preserved, while below-average solutions receive higher values and are disrupted more severely. This approach eliminates the need to specify optimal values of $p_c$ and $p_m$ by hand. The AGA is compared with previous approaches to adapting operator probabilities, and the Schema theorem is derived for the AGA. Experimental results show that the AGA converges to the global optimum in fewer generations than the Simple Genetic Algorithm (SGA) and gets stuck at local optima less often. The performance advantage of the AGA grows as the epistasis and multimodality of the objective function increase. The authors view the AGA as a step toward self-organizing GAs capable of adapting themselves to locate global optima in multimodal landscapes.
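The fitness-dependent adaptation described above can be sketched as follows. This is a minimal illustration, not the paper's code; the constants $k_1 = k_3 = 1.0$ and $k_2 = k_4 = 0.5$ follow the values reported in the paper, and the function names are our own.

```python
def adaptive_probabilities(f_prime, f, f_max, f_avg,
                           k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Return (p_c, p_m) for one crossover pair and one candidate for mutation.

    f_prime : the larger fitness of the two parents (drives p_c)
    f       : fitness of the solution to be mutated (drives p_m)
    f_max   : maximum fitness in the current population
    f_avg   : average fitness of the current population
    """
    spread = f_max - f_avg
    if spread == 0:
        # Fully converged population: fall back to the maximal
        # disruption rates so the search can escape.
        return k3, k4
    # Solutions at or above the average get probabilities that shrink
    # linearly as their fitness approaches f_max (preservation);
    # sub-average solutions get the fixed higher rates k3/k4 (disruption).
    p_c = k1 * (f_max - f_prime) / spread if f_prime >= f_avg else k3
    p_m = k2 * (f_max - f) / spread if f >= f_avg else k4
    return p_c, p_m
```

Note that the best solution in the population receives $p_c = p_m = 0$, which is what preserves it from one generation to the next without an explicit elitism step.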