A New Insight on Augmented Lagrangian Method with Applications in Machine Learning


28 February 2024 / Published online: 13 April 2024 | Jianchao Bai, Linyuan Jia, Zheng Peng
This paper introduces a novel relaxed augmented Lagrangian method (P-rALM) for solving convex optimization problems with equality or inequality constraints. The method leverages double-penalty terms in the primal subproblem to enhance its performance. It is extended to handle general multi-block separable convex optimization problems, and two related primal-dual hybrid gradient algorithms are discussed. The paper establishes convergence results for both sublinear and linear convergence rates, based on variational characterizations of the saddle-point and first-order optimality conditions. Extensive experiments on linear support vector machine and robust principal component analysis problems from machine learning demonstrate that the proposed algorithms outperform several state-of-the-art methods. The key contributions include a relaxation step that simplifies the dual variable update while maintaining weak convergence properties.
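To make the idea of a relaxed dual update concrete, here is a minimal sketch of a *generic* relaxed augmented Lagrangian method for an equality-constrained convex problem. This is an illustration only, not the paper's P-rALM: the toy problem min (1/2)||x − c||², s.t. Ax = b, the penalty parameter `beta`, and the relaxation factor `gamma` are all assumptions chosen so the primal subproblem has a closed-form solution.

```python
import numpy as np

def relaxed_alm(A, b, c, beta=1.0, gamma=1.5, iters=500):
    """Generic relaxed ALM sketch for: min (1/2)||x - c||^2  s.t.  Ax = b.

    beta  - penalty parameter of the augmented Lagrangian (assumed fixed)
    gamma - relaxation factor in (0, 2); gamma = 1 recovers classical ALM
    """
    m, n = A.shape
    lam = np.zeros(m)                      # dual variable (multiplier)
    M = np.eye(n) + beta * A.T @ A         # Hessian of the x-subproblem
    for _ in range(iters):
        # Primal step: exact minimizer of the augmented Lagrangian in x,
        # i.e. argmin_x (1/2)||x - c||^2 + lam^T(Ax - b) + (beta/2)||Ax - b||^2
        x = np.linalg.solve(M, c - A.T @ lam + beta * A.T @ b)
        # Relaxed dual update: the classical multiplier ascent step,
        # scaled by the relaxation factor gamma
        lam = lam + gamma * beta * (A @ x - b)
    return x, lam

# Small random instance: the constraint residual shrinks to (near) zero
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
c = rng.standard_normal(6)
x, lam = relaxed_alm(A, b, c)
print(np.linalg.norm(A @ x - b))  # constraint residual, should be tiny
```

For this strongly convex toy problem the dual iteration is a linear contraction, so the constraint residual decays linearly, matching the flavor of the linear-rate results the abstract mentions; the paper's actual method additionally uses double-penalty terms in the primal subproblem, which this sketch omits.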