This paper introduces a linearized alternating direction method with adaptive penalty (LADMAP) for low-rank representation (LRR), a model widely used in subspace clustering and machine learning. Existing LRR solvers based on the alternating direction method (ADM) suffer from high computational cost due to matrix-matrix multiplications and matrix inversions, even when partial SVD is used; moreover, the auxiliary variables they introduce slow down convergence. To address these issues, LADMAP linearizes the quadratic penalty term and allows the penalty to change adaptively, eliminating the need for auxiliary variables and matrix inversions. A novel rule for updating the penalty is proposed to further speed up convergence. By representing the representation matrix via its skinny SVD and exploiting advanced functionalities of the PROPACK package, the complexity of solving LRR is reduced to $O(rn^2)$, where $r$ is the rank of the representation matrix and $n$ is the number of samples. Numerical experiments show that LADMAP outperforms state-of-the-art algorithms in both speed and accuracy, making it suitable for large-scale applications. The method also applies to more general convex programs.
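
To make the core idea concrete, here is a schematic sketch of one LADMAP-style iteration for the LRR model $\min_{Z,E} \|Z\|_* + \mu\|E\|_{2,1}$ s.t. $X = XZ + E$; the symbols $\mu$ (noise weight), $\Lambda_k$ (Lagrange multiplier), $\beta_k$ (penalty), and $\eta$ (linearization constant) are assumed notation for illustration, not fixed by the abstract:
\begin{align*}
E_{k+1} &= \operatorname*{argmin}_{E}\ \mu\|E\|_{2,1} + \tfrac{\beta_k}{2}\bigl\|X - XZ_k - E + \Lambda_k/\beta_k\bigr\|_F^2,\\
Z_{k+1} &= \Theta_{(\beta_k\eta)^{-1}}\!\Bigl(Z_k + \tfrac{1}{\eta}\,X^{\top}\bigl(X - XZ_k - E_{k+1} + \Lambda_k/\beta_k\bigr)\Bigr), \qquad \eta > \|X\|_2^2,\\
\Lambda_{k+1} &= \Lambda_k + \beta_k\bigl(X - XZ_{k+1} - E_{k+1}\bigr),\\
\beta_{k+1} &= \min\bigl(\beta_{\max},\ \rho\,\beta_k\bigr),
\end{align*}
where $\Theta_{\varepsilon}(\cdot)$ denotes singular value thresholding. Because the quadratic penalty in the $Z$-subproblem is linearized at $Z_k$, the update requires no auxiliary variable and no matrix inversion, and the factor $\rho \geq 1$ is chosen adaptively so that the penalty grows only when the iterates have nearly stalled, which is what accelerates convergence relative to a fixed penalty.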