2020 | Yoshua Bengio, Andrea Lodi, Antoine Prouvost
This paper surveys recent efforts to integrate machine learning (ML) with combinatorial optimization (CO). CO problems, particularly NP-hard ones, are expensive to solve exactly, and practical algorithms often rely on handcrafted heuristics that can be inefficient or suboptimal. Machine learning, with its ability to exploit large datasets and complex structure, offers a promising way to improve these algorithms.
The authors advocate for further integration of ML and CO, emphasizing the importance of treating optimization problem instances as data points and identifying the relevant problem distributions to learn over. They highlight two main motivations for using ML in CO: approximation and discovery of new policies. Approximation uses ML to imitate computationally expensive decisions cheaply, while discovery learns new policies from experience, typically through reinforcement learning.
The paper reviews various approaches, including:
1. **Imitation Learning**: Using ML to approximate expert decisions, such as selecting cutting planes in mixed-integer linear programming (MILP) or branching variables in branch-and-bound (B&B) trees (see the sketch after this list).
2. **Reinforcement Learning**: Learning new policies through trial and error, such as greedily constructing tours for the traveling salesman problem (TSP) by selecting the next node with a graph neural network (GNN).
3. **End-to-End Learning**: Training ML models to output solutions directly from input instances, such as using pointer networks for TSP.
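To make item 1 concrete, below is a minimal sketch (not the authors' implementation) of imitation learning for branching-variable selection: a small MLP scores each candidate variable at a B&B node and is trained with cross-entropy to match the choice an expensive expert (e.g. strong branching) would make. The feature layout, sizes, and the synthetic data generator are illustrative placeholders, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATURES = 16      # hand-crafted per-variable features (placeholder size)
N_CANDIDATES = 10    # branching candidates per node (placeholder size)

class BranchingScorer(nn.Module):
    """Scores every candidate variable at a node with a shared MLP."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, candidate_features: torch.Tensor) -> torch.Tensor:
        # candidate_features: (batch, n_candidates, n_features)
        # -> one logit per candidate: (batch, n_candidates)
        return self.mlp(candidate_features).squeeze(-1)

def synthetic_batch(batch_size: int = 32):
    """Placeholder for (node features, expert choice) pairs logged from a solver."""
    feats = torch.randn(batch_size, N_CANDIDATES, N_FEATURES)
    expert_choice = torch.randint(0, N_CANDIDATES, (batch_size,))
    return feats, expert_choice

model = BranchingScorer(N_FEATURES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    feats, expert_choice = synthetic_batch()
    logits = model(feats)
    # Imitation learning: cross-entropy between the policy's candidate scores
    # and the expert's branching decision.
    loss = F.cross_entropy(logits, expert_choice)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At solve time, the learned scorer would stand in for the expert: the solver branches on the candidate with the highest logit, trading some decision quality for much cheaper per-node decisions.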
The authors also discuss the challenges and limitations of these approaches, emphasizing the need for generalization and the importance of understanding the underlying problem distribution. They conclude by highlighting the potential of combining ML and CO to create more efficient and effective algorithms for solving complex optimization problems.