Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation

2 Sep 2011 | Anonymous Author(s)
This paper proposes a linearized alternating direction method with adaptive penalty (LADMAP) for solving low-rank representation (LRR) problems. LRR is a powerful method for subspace clustering with wide applications in computer vision and machine learning. However, existing LRR solvers based on the alternating direction method (ADM) suffer from high computational complexity due to matrix-matrix multiplications and matrix inversions, and the auxiliary variables they introduce slow convergence. To address these issues, LADMAP linearizes the quadratic penalty term and allows the penalty parameter to change adaptively, eliminating the need for auxiliary variables and matrix inversions. This reduces the computational complexity to O(rn²), where r is the rank of the representation matrix and n is the number of samples. The algorithm also employs a novel penalty update rule that accelerates convergence. Numerical experiments show that LADMAP is much faster than state-of-the-art LRR solvers, and the method extends to more general convex programs. The paper further presents a convergence analysis and stopping criteria for LADMAP, and demonstrates its effectiveness on synthetic and real-world data: LADMAP converges faster than competing methods and achieves better clustering performance.
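The iteration described above (linearize the quadratic penalty, apply singular value thresholding, then grow the penalty) can be sketched in NumPy for the standard LRR model min ||Z||_* + λ||E||_{2,1} s.t. X = XZ + E. This is an illustrative sketch, not the authors' implementation: the parameter names, the simplified penalty schedule β ← min(β_max, ρβ), and all default values are assumptions. Note also that the full SVD inside `svt` costs O(n³); the paper's O(rn²) bound relies on computing only a partial rank-r SVD per iteration.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise l2 shrinkage: proximal operator of tau * ||.||_{2,1}."""
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - tau / norms, 0.0)

def ladmap_lrr(X, lam=0.1, beta=0.1, beta_max=1e4, rho=1.5, iters=300):
    """Sketch of LADMAP for: min ||Z||_* + lam*||E||_{2,1} s.t. X = XZ + E."""
    d, n = X.shape
    Z, E, Y = np.zeros((n, n)), np.zeros((d, n)), np.zeros((d, n))
    eta = 1.02 * np.linalg.norm(X, 2) ** 2  # step constant, eta > ||X||_2^2
    for _ in range(iters):
        # Z step: linearize the quadratic penalty around the current Z, then
        # apply SVT -- no auxiliary variable and no matrix inversion needed.
        R = X - X @ Z - E + Y / beta
        Z = svt(Z + (X.T @ R) / eta, 1.0 / (beta * eta))
        # E step: closed-form column-wise shrinkage.
        E = l21_shrink(X - X @ Z + Y / beta, lam / beta)
        # Dual ascent step and (simplified) adaptive penalty update.
        resid = X - X @ Z - E
        Y = Y + beta * resid
        beta = min(beta_max, rho * beta)
    return Z, E
```

On data drawn from a low-dimensional subspace, the recovered Z stays low rank (each Z update lies in the row space of X) while the constraint residual X − XZ − E is driven toward zero as β grows.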