This paper presents an alternative extension of the Hager–Zhang nonlinear conjugate gradient method to vector optimization. The original extension by Gonçalves and Prudente was found not to guarantee descent in the vector sense, even with an exact line search; they therefore introduced a self-adjusting method that ensures global convergence without requiring regular restarts or convexity assumptions. The authors propose an alternative extension that preserves a key property of the scalar Hager–Zhang method: it ensures sufficient descent without relying on the line search or on convexity assumptions. Global convergence of the method is analyzed under mild assumptions using the Wolfe line search, and numerical experiments are conducted to demonstrate its practical performance.
Vector optimization involves minimizing or maximizing a vector-valued function under certain constraints. The problem considered here is to minimize $ F(x) $ with respect to the cone $ K $ subject to $ x \in \mathbb{R}^n $, where $ F: \mathbb{R}^n \to \mathbb{R}^m $ is continuously differentiable and $ K \subseteq \mathbb{R}^m $ is a closed, convex, and pointed cone with non-empty interior. The cone induces the partial order $ \preceq_K $ defined by $ a \preceq_K b $ if and only if $ b - a \in K $.
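In this notation, and writing $ \min_K $ for minimization with respect to the order induced by $ K $ (a notational convention used here for illustration), the problem and the order read
$$
\min_K \; F(x) \quad \text{subject to} \quad x \in \mathbb{R}^n, \qquad a \preceq_K b \iff b - a \in K.
$$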
Vector optimization arises in various fields such as engineering design, space exploration, cancer treatment planning, and management science. The usual notion of optimality is replaced by Pareto optimality: a point $ x^* $ is K-Pareto optimal if no other point $ x $ satisfies $ F(x) \neq F(x^*) $ and $ F(x) \preceq_K F(x^*) $.
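Formally, $ x^* $ is K-Pareto optimal if there is no $ x \in \mathbb{R}^n $ with
$$
F(x) \preceq_K F(x^*) \quad \text{and} \quad F(x) \neq F(x^*).
$$
For instance, in the multiobjective case $ K = \mathbb{R}^m_+ $, the order $ \preceq_K $ is the componentwise order and this notion reduces to the usual Pareto optimality.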
Several methods have been proposed for finding Pareto optimal points, including scalarization techniques and Pareto descent methods; an illustrative scalarization is sketched below. The latter are parameter-free and do not require ordering or weighting information. Recent research has focused on extending single-objective optimization methods to vector optimization, and many such extensions have appeared in the literature.
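One classical scalarization, given here only for illustration and not as the paper's approach, fixes a nonzero weight vector $ w $ in the dual cone $ K^* = \{ w \in \mathbb{R}^m : \langle w, y \rangle \ge 0 \ \text{for all } y \in K \} $ and solves the scalar problem
$$
\min_{x \in \mathbb{R}^n} \; \langle w, F(x) \rangle,
$$
which in the multiobjective case $ K = \mathbb{R}^m_+ $ is the familiar weighted sum $ \sum_{i=1}^m w_i F_i(x) $.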