Robust Principal Component Analysis?

December 17, 2009 | Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright
This paper addresses the problem of recovering a low-rank matrix and a sparse matrix from their sum, where the entries of the sparse matrix can be arbitrarily large in magnitude. The authors propose a convex program called Principal Component Pursuit (PCP), which minimizes a weighted combination of the nuclear norm and the $\ell_1$ norm: minimize $\|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$. They prove that under suitable incoherence assumptions, PCP exactly recovers both the low-rank and the sparse components. This yields a principled approach to robust principal component analysis, since the method tolerates a positive fraction of arbitrarily corrupted entries. The paper also discusses an augmented Lagrange multiplier algorithm for solving the optimization problem and presents applications to background modeling in video surveillance and to removing shadows and specularities from face images. The main result is a theorem stating that, with high probability, PCP with the fixed regularization parameter $\lambda = 1/\sqrt{\max(n_1, n_2)}$ recovers the low-rank and sparse components exactly, provided the rank of the low-rank component and the number of nonzero entries of the sparse component are suitably bounded. The proof relies on dual certificates constructed with a modified version of the golfing scheme introduced by Gross for matrix completion.
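To make the PCP program concrete, below is a minimal NumPy sketch of an inexact augmented Lagrange multiplier iteration of the kind the paper discusses, alternating singular value thresholding for the low-rank part with entrywise soft-thresholding for the sparse part. The function names, initialization, penalty schedule, and stopping rule here are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def pcp(M, lam=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit: min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    n1, n2 = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n1, n2))          # the paper's universal choice of lambda
    norm_two = np.linalg.norm(M, 2)               # spectral norm of the data matrix
    Y = M / max(norm_two, np.abs(M).max() / lam)  # common dual-variable initialization
    S = np.zeros_like(M, dtype=float)
    mu = 1.25 / norm_two                          # penalty parameter (heuristic start)
    mu_max, rho = mu * 1e7, 1.5                   # cap and growth factor for mu
    fro_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)         # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)      # sparse update
        residual = M - L - S
        Y = Y + mu * residual                     # dual ascent on the constraint L + S = M
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(residual, 'fro') <= tol * fro_M:
            break
    return L, S
```

On a synthetic sum of a low-rank matrix and a sparse corruption with a small fraction of nonzero entries, calling `pcp(M)` with the default $\lambda$ typically returns estimates close to the true components, mirroring the exact-recovery behavior described by the main theorem.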