13 April 2024 | Jianchao Bai, Linyuan Jia, Zheng Peng
This paper presents a new insight into the Augmented Lagrangian Method (ALM) with applications in machine learning. The authors develop a novel relaxed ALM, called P-rALM, which uses double-penalty terms for the primal subproblem to solve a family of convex optimization problems with equality or inequality constraints. The method is extended to solve general multi-block separable convex optimization problems, and two related primal-dual hybrid gradient algorithms are discussed. The convergence rates of the method, including sublinear and linear convergence, are established through variational characterizations of the saddle-point and first-order optimality conditions. Experimental results on linear support vector machine and robust principal component analysis problems show that the proposed algorithms outperform several state-of-the-art algorithms.
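For reference, the equality-constrained template underlying this line of work, together with its augmented Lagrangian, can be written as follows (a standard formulation with penalty parameter $r > 0$; the sign convention for the multiplier $\lambda$ varies across papers):

$$\min_{x \in \mathcal{X}} f(x) \quad \text{s.t.} \quad Ax = b, \qquad \mathcal{L}_r(x, \lambda) = f(x) - \lambda^{\top}(Ax - b) + \frac{r}{2}\|Ax - b\|^2.$$

The inequality-constrained variant replaces $Ax = b$ with $Ax \ge b$ and restricts the multiplier to be nonnegative.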
The paper first reviews the Balanced Augmented Lagrangian Method (B-ALM) for convex optimization problems with equality or inequality constraints. The ALM is a fundamental tool for solving such problems, alternating two steps: solving the primal subproblem and updating the dual variable. The core subproblem of ALM is often complex and hard to solve exactly, so in practice it must be handled by inexact approximation techniques. The B-ALM, an improved version of ALM, simplifies this subproblem to a proximal estimation, which admits a closed-form solution in many practical applications. However, it requires an inner solver to tackle the dual subproblem, or a Cholesky factorization to solve an equivalent linear system arising from the dual update.
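As a concrete illustration of the two-step structure just described, here is a minimal sketch of the classical ALM iteration on a toy equality-constrained quadratic program, where the primal subproblem happens to have a closed-form solution. The quadratic objective and all parameter choices are illustrative, not from the paper:

```python
import numpy as np

def alm(Q, c, A, b, r=1.0, iters=200):
    """Classical ALM for min 0.5*x'Qx + c'x  s.t.  Ax = b.

    Each iteration performs the two steps described above:
      (1) minimize the augmented Lagrangian L_r(x, lam) in x
          (closed form here because f is quadratic),
      (2) update the dual variable by a multiplier step.
    """
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    H = Q + r * A.T @ A                  # Hessian of L_r in x
    for _ in range(iters):
        # Primal step: x = argmin_x L_r(x, lam)
        x = np.linalg.solve(H, A.T @ lam - c + r * A.T @ b)
        # Dual step: lam <- lam - r*(Ax - b)
        lam = lam - r * (A @ x - b)
    return x, lam

# Toy usage: project the origin onto the hyperplane {x : sum(x) = 1}
Q, c = np.eye(3), np.zeros(3)
A, b = np.ones((1, 3)), np.array([1.0])
x, lam = alm(Q, c, A, b)
print(x)  # ≈ [1/3, 1/3, 1/3]
```

For a general nonsmooth $f$, the `np.linalg.solve` line would have to be replaced by an inner solver or an approximation, which is exactly the difficulty that B-ALM and P-rALM aim to reduce.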
Motivated by these observations, the authors develop and analyze a new double-penalty ALM with a relaxation step (P-rALM). The method reduces the difficulty of updating the dual variable while preserving the convergence properties of ALM. As with B-ALM, the convergence conditions are less restrictive, since the parameter r does not depend on the spectral radius of the matrix $A^T A$.
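The paper's exact P-rALM updates are not reproduced in this summary, but the balanced-ALM structure it builds on can be sketched: a proximal primal step, plus a dual step that solves a fixed $m \times m$ linear system, Cholesky-factorized once and reused across iterations. The update formulas below follow the standard balanced-ALM pattern and are an illustrative reconstruction, not the authors' scheme:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def balanced_alm_sketch(Q, c, A, b, r=1.0, delta=1.0, iters=300):
    """B-ALM-style iteration for min 0.5*x'Qx + c'x  s.t.  Ax = b.

    Primal step: a proximal estimation (closed form here since f is
    quadratic).  Dual step: an m-by-m linear system with a fixed
    coefficient matrix, Cholesky-factorized once and reused -- the
    "equivalent linear equation" mentioned in the text.  The concrete
    update formulas are an assumed reconstruction, not the paper's
    P-rALM scheme.
    """
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    Hp = Q + r * np.eye(n)                           # prox-step coefficient
    M = cho_factor(A @ A.T / r + delta * np.eye(m))  # dual-step matrix
    for _ in range(iters):
        x_old = x
        # Proximal primal step: argmin_x f(x) - lam'Ax + (r/2)||x - x_old||^2
        x = np.linalg.solve(Hp, A.T @ lam - c + r * x_old)
        # Dual step reusing the Cholesky factor of the fixed matrix
        lam = lam + cho_solve(M, b - A @ (2.0 * x - x_old))
    return x, lam

# Same toy problem as above: project the origin onto {x : sum(x) = 1}
Q, c = np.eye(3), np.zeros(3)
A, b = np.ones((1, 3)), np.array([1.0])
x, lam = balanced_alm_sketch(Q, c, A, b)
print(x)  # ≈ [1/3, 1/3, 1/3]
```

Note that the primal step no longer involves $A^T A$ at all, which is why the proximal parameter r can be chosen independently of the spectral radius of $A^T A$; the price is the dual-side linear system, the burden that P-rALM is designed to ease.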