Kernels for Vector-Valued Functions: a Review

April 17, 2012 | Mauricio A. Álvarez, Lorenzo Rosasco, Neil D. Lawrence
This paper reviews methods for designing and learning valid kernel functions for multiple outputs, focusing on the connection between probabilistic and functional approaches. Kernel methods are central in machine learning: they provide a natural choice of hypothesis space and regularization functional through reproducing kernel Hilbert spaces (RKHS), and from a probabilistic perspective they are key to Gaussian processes, where the kernel plays the role of the covariance function. Traditional kernel methods address scalar-output problems, but interest has grown in methods for multiple outputs, motivated by frameworks such as multitask learning.

The paper discusses different approaches to designing multiple-output kernels, emphasizing the connection between Bayesian and regularization frameworks. It covers the extension of kernel methods to vector-valued functions, including reproducing kernels for vector-valued functions and Gaussian processes for vector-valued functions. It also explores separable kernels and sums of separable kernels, as well as extensions beyond separable kernels, such as invariant kernels and process convolutions. The paper addresses parameter estimation and computational complexity, and discusses applications of multivariate kernels. It concludes with a discussion of the connections between the Bayesian and regularization approaches, highlighting the importance of kernels in both perspectives. The aim is to provide a comprehensive overview of the state of the art in multi-output/multi-task learning with kernel methods and Gaussian processes.
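To make the separable-kernel construction mentioned above concrete, the following is a minimal sketch, not taken from the paper itself: a separable multi-output kernel has the form K((x, d), (x', d')) = B[d, d'] * k(x, x'), so its Gram matrix over N inputs and D outputs is the Kronecker product of a D x D coregionalization matrix B with the scalar-kernel Gram matrix. The RBF base kernel, the rank-one-plus-diagonal choice of B, and all variable names here are illustrative assumptions, not the paper's notation.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Scalar RBF kernel k(x, x') on the input space.
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def separable_multioutput_kernel(X1, X2, B, lengthscale=1.0):
    # Separable (coregionalization) kernel for D outputs:
    #   K((x, d), (x', d')) = B[d, d'] * k(x, x'),
    # returned as the full (D*N1) x (D*N2) Gram matrix B kron k(X1, X2).
    K_input = rbf_kernel(X1, X2, lengthscale)
    return np.kron(B, K_input)

# Illustrative example: two outputs, B = W W^T + diag(kappa) (assumed values).
W = np.array([[1.0], [0.5]])
kappa = np.array([0.1, 0.2])
B = W @ W.T + np.diag(kappa)

X = np.random.default_rng(0).uniform(0.0, 1.0, size=(5, 1))
K = separable_multioutput_kernel(X, X, B)  # shape (10, 10), symmetric PSD

In a Gaussian process reading, K would serve as the joint covariance matrix of the two output processes evaluated at the five inputs; a sum of such terms with different base kernels gives the "sum of separable kernels" construction the review also covers.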