Ant Colony Sampling with GFlowNets for Combinatorial Optimization

22 May 2024 | Minsu Kim, Sanghyeok Choi, Hyeonah Kim, Jiwoo Son, Jinkyoo Park, Yoshua Bengio
This paper introduces the Generative Flow Ant Colony Sampler (GFACS), a neural-guided probabilistic search algorithm for combinatorial optimization (CO). GFACS integrates generative flow networks (GFlowNets) with ant colony optimization (ACO), enhancing ACO by providing an informed prior distribution over decision variables. GFlowNets learn a constructive policy in combinatorial spaces, while ACO supplies a flexible solution-generation scheme. The paper also presents a novel off-policy training algorithm, leveraging local search and shared energy normalization, for scaling conditional GFlowNets to large combinatorial spaces.

The main contributions are: 1. A new viewpoint on ACO based on amortized inference rather than policy-gradient RL. 2. A novel GFlowNet training algorithm for large-scale combinatorial tasks.

Experimental results show that GFACS consistently outperforms both classic ACO algorithms and DeepACO across seven CO tasks, spanning routing problems, scheduling problems, and a grouping problem, and that it is competitive with problem-specific heuristics for vehicle routing problems. The proposed off-policy training method further improves GFACS, demonstrating its effectiveness for GFlowNets. The paper closes with a detailed discussion of limitations and future directions, highlighting the importance of effective GFlowNet priors for efficient pheromone posterior updates in large combinatorial spaces.
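To make the ACO side of this pairing concrete, here is a minimal sketch of the classic ant-colony loop on a TSP-like instance: each ant builds a tour by sampling the next city with probability proportional to pheromone^alpha times a prior heuristic^beta, and pheromones are then evaporated and reinforced by good tours. In GFACS the prior would come from a learned GFlowNet policy; here `prior` is just a placeholder matrix (e.g. inverse distance), and the function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def construct_tour(dist, pheromone, prior, alpha=1.0, beta=1.0, rng=None):
    """Build one ant's tour: the next city is sampled with probability
    proportional to pheromone**alpha * prior**beta (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    n = dist.shape[0]
    tour = [0]                      # start every tour at city 0
    unvisited = set(range(1, n))
    while unvisited:
        cur = tour[-1]
        cand = np.array(sorted(unvisited))
        weights = pheromone[cur, cand] ** alpha * prior[cur, cand] ** beta
        nxt = rng.choice(cand, p=weights / weights.sum())
        tour.append(int(nxt))
        unvisited.remove(int(nxt))
    return tour

def update_pheromone(pheromone, tours, costs, rho=0.1):
    """Standard evaporate-then-deposit pheromone update: shorter tours
    deposit more pheromone on the edges they used."""
    pheromone *= 1.0 - rho
    for tour, cost in zip(tours, costs):
        for a, b in zip(tour, tour[1:] + tour[:1]):  # close the cycle
            pheromone[a, b] += 1.0 / cost
            pheromone[b, a] += 1.0 / cost
    return pheromone
```

The key point the paper makes is about where `prior` comes from: classic ACO hand-designs it (e.g. `1 / dist`), DeepACO learns it with policy-gradient RL, and GFACS instead trains it as a GFlowNet via amortized inference, so that sampling frequencies match a reward-proportional target distribution.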