Smooth Tchebycheff Scalarization for Multi-Objective Optimization

2024 | Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, Qingfu Zhang
This paper proposes a smooth Tchebycheff (STCH) scalarization method for efficient gradient-based multi-objective optimization. The method addresses the limitations of existing scalarization and adaptive gradient approaches by leveraging smooth optimization techniques to achieve faster convergence and lower computational complexity. The STCH scalarization is a smooth approximation of the classical Tchebycheff scalarization, which has strong theoretical properties but converges slowly in practice because it is nonsmooth. By applying a smoothing technique, the STCH scalarization becomes differentiable, enabling efficient gradient-based optimization while retaining the ability to find all Pareto solutions under certain conditions.

The method is shown to be effective both for finding a single Pareto solution and for Pareto set learning, outperforming other scalarization and adaptive gradient methods across a range of real-world applications. Experimental results on multi-task learning and engineering design problems demonstrate the effectiveness and efficiency of the proposed method. The STCH scalarization thus offers a promising alternative to existing methods, balancing theoretical guarantees with practical performance.
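The core smoothing idea can be sketched briefly. A minimal NumPy illustration, assuming a log-sum-exp smooth maximum as the smoothing technique; the names `lam` (preference vector), `z` (reference/ideal point), and `mu` (smoothing parameter) are illustrative, and the exact formulation should be taken from the paper itself:

```python
import numpy as np

def tchebycheff(f, lam, z):
    # Classical Tchebycheff scalarization: max_i lam_i * (f_i - z_i).
    # Nonsmooth at points where the maximum is attained by several objectives.
    return np.max(lam * (f - z))

def smooth_tchebycheff(f, lam, z, mu=0.1):
    # Smooth approximation via a log-sum-exp "soft max":
    #   mu * log(sum_i exp(lam_i * (f_i - z_i) / mu)),
    # computed with the standard max-shift trick for numerical stability.
    # Differentiable everywhere, so plain gradient descent applies.
    t = lam * (f - z) / mu
    t_max = np.max(t)
    return mu * (t_max + np.log(np.sum(np.exp(t - t_max))))

# Example: the smooth value upper-bounds the nonsmooth one, and the gap
# shrinks as mu -> 0 (it is at most mu * log(m) for m objectives).
f = np.array([1.0, 2.0, 3.0])
lam = np.array([0.5, 0.3, 0.2])
z = np.zeros(3)
hard = tchebycheff(f, lam, z)
soft = smooth_tchebycheff(f, lam, z, mu=0.1)
```

As `mu` decreases, `smooth_tchebycheff` approaches the classical Tchebycheff value while remaining differentiable, which is what enables the faster gradient-based convergence described above.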