Constrained Multi-objective Optimization with Deep Reinforcement Learning Assisted Operator Selection

15 Jan 2024 | Fei Ming, Wenyin Gong, Member, IEEE, Ling Wang, Member, IEEE, and Yaochu Jin, Fellow, IEEE
This paper addresses the challenge of operator selection in constrained multi-objective evolutionary algorithms (CMOEAs) by proposing an online operator selection framework assisted by Deep Reinforcement Learning (DRL). The framework uses the population's convergence, diversity, and feasibility as states, candidate operators as actions, and the improvement of these states as rewards. A Q-Network is employed to learn a policy that estimates the Q-values of all actions, enabling the algorithm to adaptively select the most effective operator based on the current population state. The proposed method is evaluated on 42 benchmark problems using four popular CMOEAs, showing significant improvements in performance compared to nine state-of-the-art CMOEAs. The experimental results demonstrate that the DRL-assisted operator selection significantly enhances the versatility and performance of CMOEAs. The paper also discusses the effectiveness of the proposed method in handling different types of constrained multi-objective optimization problems and provides insights into the impact of various parameters on the algorithm's performance.
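To illustrate the state-action-reward loop the abstract describes, the following is a minimal sketch, not the paper's implementation: the population's (convergence, diversity, feasibility) indicators form the state, operator indices are the actions, and indicator improvement is the reward. A linear Q-function trained with a TD update stands in for the paper's Q-Network; all class and parameter names here are illustrative assumptions.

```python
import numpy as np

class OperatorSelector:
    """Simplified sketch of DRL-assisted operator selection.

    State:   (convergence, diversity, feasibility) of the population.
    Actions: indices of candidate variation operators.
    Reward:  improvement of the state indicators after applying an operator.
    A linear Q-function stands in for the paper's Q-Network.
    """

    def __init__(self, n_state=3, n_actions=2, lr=0.1, gamma=0.9, eps=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        # Q(s, a) = W[a] @ s  -- one weight row per operator
        self.W = self.rng.normal(scale=0.01, size=(n_actions, n_state))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def q_values(self, state):
        return self.W @ np.asarray(state, dtype=float)

    def select(self, state):
        # epsilon-greedy: explore occasionally, otherwise pick the best operator
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.W.shape[0]))
        return int(np.argmax(self.q_values(state)))

    def update(self, state, action, reward, next_state):
        # one gradient step on the squared temporal-difference error
        s = np.asarray(state, dtype=float)
        target = reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.W[action] += self.lr * td_error * s
```

In a toy loop where one operator consistently yields a higher reward, the selector learns to prefer it; in the actual framework the rewards would come from measured improvements in convergence, diversity, and feasibility after each generation.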