A Riemannian Framework for Tensor Computing


July 2004 | Xavier Pennec, Pierre Fillard, Nicholas Ayache
This paper presents a Riemannian framework for tensor computing, focusing on positive definite symmetric matrices (tensors). The authors propose an affine-invariant Riemannian metric on the space of tensors, which leads to strong theoretical properties. This metric transforms the cone of positive definite symmetric matrices into a regular manifold of constant curvature without boundaries, where null eigenvalues are at infinity. The paper demonstrates that this metric yields a unique geodesic between any two tensors and a well-defined mean of a set of tensors. It also shows that the Riemannian metric can be used to generalize many important geometric data processing algorithms, such as interpolation, filtering, diffusion, and restoration of missing data. The authors provide intrinsic numerical schemes for computing the gradient and Laplacian operators and propose least-squares criteria based on the invariant Riemannian distance to ensure fidelity to the data. The paper covers topics such as exponential and logarithm maps, gradient descent, PDE evolution, and statistical operations on tensors, providing a comprehensive framework for tensor-valued data processing.
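As a concrete illustration, the core operations mentioned above (exponential and logarithm maps, geodesic interpolation between two tensors, and the mean of a set of tensors) can be sketched with the standard closed-form expressions for the affine-invariant metric. The following is a minimal NumPy sketch, not the authors' implementation; the function names and the simple fixed-point iteration for the mean are illustrative choices, and all inputs are assumed to be symmetric positive definite matrices.

```python
import numpy as np

def _sym_fun(M, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(0.5 * (M + M.T))
    return (V * fun(w)) @ V.T

def exp_map(Sigma, W):
    """Riemannian exponential: Exp_Sigma(W) = S expm(S^-1 W S^-1) S, with S = Sigma^(1/2)."""
    S = _sym_fun(Sigma, np.sqrt)
    S_inv = _sym_fun(Sigma, lambda w: 1.0 / np.sqrt(w))
    return S @ _sym_fun(S_inv @ W @ S_inv, np.exp) @ S

def log_map(Sigma, Lam):
    """Riemannian logarithm: Log_Sigma(Lam) = S logm(S^-1 Lam S^-1) S."""
    S = _sym_fun(Sigma, np.sqrt)
    S_inv = _sym_fun(Sigma, lambda w: 1.0 / np.sqrt(w))
    return S @ _sym_fun(S_inv @ Lam @ S_inv, np.log) @ S

def distance(A, B):
    """Affine-invariant distance: Frobenius norm of logm(A^-1/2 B A^-1/2)."""
    A_inv_sqrt = _sym_fun(A, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_sym_fun(A_inv_sqrt @ B @ A_inv_sqrt, np.log), 'fro')

def geodesic(A, B, t):
    """Point at parameter t on the unique geodesic from A (t=0) to B (t=1);
    this is the natural Riemannian interpolation between two tensors."""
    return exp_map(A, t * log_map(A, B))

def frechet_mean(tensors, iters=100, tol=1e-10):
    """Mean of a set of tensors via the fixed-point iteration
    Sigma <- Exp_Sigma( (1/N) sum_i Log_Sigma(Lambda_i) )."""
    mean = tensors[0].copy()
    for _ in range(iters):
        step = np.mean([log_map(mean, T) for T in tensors], axis=0)
        mean = exp_map(mean, step)
        if np.linalg.norm(step, 'fro') < tol:
            break
    return mean
```

Because the null-eigenvalue boundary is pushed to infinity under this metric, interpolation and averaging computed this way always return positive definite matrices, which is the practical motivation for preferring these operations over their Euclidean counterparts.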