This paper proposes an alternative extension of the Hager–Zhang nonlinear conjugate gradient method to vector optimization. The authors address an issue raised by Gonçalves and Prudente, who observed that a direct extension of the Hager–Zhang method to the vector setting may fail to produce descent directions in the vector sense, even under an exact line search. They introduce a self-adjusting Hager–Zhang conjugate gradient method that preserves the sufficient descent property known from the scalar case without relying on line searches or convexity assumptions. Global convergence of the new method is proven under mild assumptions, and the method is implemented with a Wolfe-type line search. Numerical experiments demonstrate its practical performance. The paper also provides a detailed introduction to vector optimization problems and background on scalarization techniques and related methods in this field.
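For context, the scalar Hager–Zhang scheme whose descent property the paper seeks to carry over generates directions by (this is the standard scalar formula from Hager and Zhang, not the paper's vector-valued extension, whose precise form is not reproduced here):
$$
d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k^{HZ} d_k, \qquad
\beta_k^{HZ} = \frac{1}{d_k^{\top} y_k}\left(y_k - 2\,d_k\,\frac{\|y_k\|^2}{d_k^{\top} y_k}\right)^{\!\top} g_{k+1},
$$
where $g_k = \nabla f(x_k)$ and $y_k = g_{k+1} - g_k$. In the scalar case this choice guarantees the sufficient descent condition $g_k^{\top} d_k \le -\tfrac{7}{8}\|g_k\|^2$ independently of the line search; it is this property that does not transfer automatically to the vector setting and motivates the self-adjusting modification proposed in the paper.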