Local Search in Combinatorial Optimization


Y. Crama¹, A.W.J. Kolen², E.J. Pesch³
Local search is a fundamental approach in combinatorial optimization, where the goal is to find a solution that minimizes an objective function. It starts from an initial feasible solution and iteratively moves to a neighboring solution that improves the objective value, where neighbors are solutions differing only slightly from the current one. This scheme underlies classical methods such as the gradient method and the simplex method, and it also appears in neural networks such as Hopfield nets: there the objective function corresponds to the energy function of the network, and neighbors are configurations that differ in the state of a single neuron.

Implementing local search requires three choices: an initial solution, a neighborhood structure, and a rule for selecting an improving neighbor. Although an initial solution is often easy to find, its choice can significantly affect the final result. Neighborhood size involves a critical trade-off: small neighborhoods are fast to search but offer limited scope for improvement, while large neighborhoods are costlier to search but may lead to better, even optimal, solutions. The rule for selecting an improving neighbor is equally important; since theoretical analysis is often insufficient to guide these choices, experimentation is usually required.

Local search is attractive because of its wide applicability and low computational cost, and it can handle complex problems for which analytical models are impractical. Its main limitation is that it stops at a local optimum, which need not be a global one. To overcome this, extensions such as simulated annealing, tabu search, and genetic algorithms have been developed; these methods accept occasional degradations of the objective function in order to escape local optima. The paper discusses these approaches and their applications to scheduling and other combinatorial optimization problems.
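The basic scheme described above can be sketched as follows. This is a minimal, generic illustration, not the paper's own pseudocode: the function names (`local_search`, `neighbors`, `cost`) and the toy objective are assumptions made for the example.

```python
def local_search(initial, neighbors, cost):
    """Generic best-improvement local search (illustrative sketch).

    initial   -- a feasible starting solution
    neighbors -- function returning the neighborhood of a solution
    cost      -- objective function to be minimized
    """
    current = initial
    while True:
        # Pick the best neighbor of the current solution.
        best = min(neighbors(current), key=cost, default=current)
        if cost(best) < cost(current):
            current = best      # move to an improving neighbor
        else:
            return current      # local optimum: no neighbor improves

# Toy example: minimize f(x) = (x - 3)^2 over the integers,
# with neighbors x - 1 and x + 1.
f = lambda x: (x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(local_search(10, nbrs, f))  # 3
```

On this toy instance the neighborhood is exact, so the local optimum found is also global; for hard combinatorial problems the same loop typically stops at a local optimum that need not be globally optimal.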
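The extensions mentioned above escape local optima by occasionally accepting worsening moves. A simulated-annealing-style acceptance rule can be sketched as follows; the parameter names and the geometric cooling schedule are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def anneal(initial, neighbor, cost, temp0=10.0, cooling=0.95, steps=1000):
    """Simulated-annealing sketch: a worsening move of size delta is
    accepted with probability exp(-delta / T), where the temperature T
    is gradually lowered, so the search can escape local optima.
    (Parameter values here are illustrative, not from the paper.)"""
    current, temp = initial, temp0
    best = current
    for _ in range(steps):
        cand = neighbor(current)           # sample one random neighbor
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = cand                 # accept improving or, sometimes, worsening move
        if cost(current) < cost(best):
            best = current                 # remember the best solution seen
        temp *= cooling                    # geometric cooling schedule
    return best

# Usage on the same toy objective, with a random unit-step neighbor.
f = lambda x: (x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
result = anneal(10, step, f)
```

At high temperature almost any move is accepted; as the temperature decreases, the rule converges toward plain local search that only accepts improvements.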