14 Dec 2016
Nicholas D. Sidiropoulos, Fellow, IEEE, Lieven De Lathauwer, Fellow, IEEE, Xiao Fu, Member, IEEE, Kejun Huang, Student Member, IEEE, Evangelos E. Papalexakis, and Christos Faloutsos
This article provides an overview of tensor decomposition in signal processing and machine learning. Tensors, or multi-way arrays, are generalizations of matrices and have become increasingly important in these fields. The paper discusses fundamental concepts, including tensor rank, rank decomposition, and various factorization models such as CPD, PARAFAC, Tucker, and HOSVD. It also covers algorithmic approaches, including alternating optimization, stochastic gradient, and statistical performance analysis. The article emphasizes the importance of understanding uniqueness, identifiability, and the relationship between tensor rank and multilinear rank. It highlights applications in signal processing (e.g., source separation, harmonic retrieval) and machine learning (e.g., collaborative filtering, topic modeling). The paper also addresses challenges such as the NP-hard nature of determining tensor rank and the need for constraints to ensure well-posedness in low-rank approximation. It discusses the differences between signal processing and machine learning perspectives on tensor decomposition, noting that while signal processing focuses on separability, machine learning emphasizes interpretability of latent space dimensions. The article concludes with a roadmap of the content, including matrix preliminaries, tensor rank and decomposition, algorithmic aspects, statistical performance analysis, and applications. It also references key concepts such as typical and generic ranks, border rank, and the role of constraints in ensuring meaningful tensor decompositions.
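For concreteness, the two factorization families named above admit the following standard third-order forms; the symbols used here (A, B, C for CPD factors; G, U, V, W for the Tucker core and factors) are generic placeholders, not necessarily the paper's own notation:

\[
\underline{X}(i,j,k) \;=\; \sum_{r=1}^{R} \mathbf{A}(i,r)\,\mathbf{B}(j,r)\,\mathbf{C}(k,r) \qquad \text{(CPD / PARAFAC)},
\]
\[
\underline{X}(i,j,k) \;=\; \sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2}\sum_{r_3=1}^{R_3} \mathbf{G}(r_1,r_2,r_3)\,\mathbf{U}(i,r_1)\,\mathbf{V}(j,r_2)\,\mathbf{W}(k,r_3) \qquad \text{(Tucker)}.
\]

As an illustration of the alternating-optimization idea mentioned above, the sketch below fits a rank-R CPD by alternating least squares (ALS) in NumPy. This is a minimal illustrative implementation under common conventions (C-order unfoldings, normal-equations solves), not the paper's reference algorithm; the function names are our own.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cpd_als(X, R, n_iter=200, seed=0):
    """Rank-R CPD of a 3-way array X via alternating least squares.

    Returns A (I x R), B (J x R), C (K x R) such that
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode-n unfoldings (C-order reshape: the last index varies fastest).
    X1 = X.reshape(I, J * K)                      # rows i, cols (j, k)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # rows j, cols (i, k)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # rows k, cols (i, j)
    for _ in range(n_iter):
        # Each update is a linear least-squares problem in one factor,
        # solved via the normal equations; * is the Hadamard product, and
        # (M^T M) for a Khatri-Rao product M reduces to an R x R matrix.
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage: fit an exactly rank-3 synthetic tensor and check the fit.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(X, R=3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # small relative error
```

The design choice worth noting is that multilinearity makes each conditional subproblem an ordinary linear least-squares fit, and the Khatri-Rao Gram-matrix identity keeps every solve at R x R cost rather than the size of the unfolded tensor, which is what makes ALS the workhorse for CPD.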