Universal Kernels for Multi-Task Learning

Andrea Caponnetto, Charles A. Micchelli, Massimiliano Pontil, Yiming Ying
This paper investigates the conditions under which operator-valued kernels are universal in the context of multi-task learning. A universal kernel is one whose associated function space can uniformly approximate any continuous function on each compact subset of the input space. The paper focuses on reproducing kernel Hilbert spaces (RKHS) of vector-valued functions, where the kernel takes values in the space of operators on a Hilbert space Y. The authors derive conditions for such a kernel to be universal, characterize these kernels, and provide examples of practical importance.

The paper begins by introducing operator-valued kernels and their basic properties. It then discusses the density of the RKHS in the space of continuous functions, the property that underlies universal approximation. The authors show that density of the RKHS is equivalent to density of the feature representation associated with the kernel. This equivalence is established through a series of lemmas and theorems, including a key result that the closure of the RKHS in the space of continuous functions coincides with the closure of the space generated by the feature map. The paper also gives an alternate proof of this result using vector measures, which allows universality to be verified through the feature representation rather than directly through the kernel sections.

The authors then present several examples of operator-valued kernels, including kernels constructed from scalar kernels and operators, and discuss their implications for multi-task learning.
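For reference, the standard formalism behind these statements reads as follows (a sketch consistent with the paper's setting, not quoted from it). An operator-valued kernel K maps X × X into the bounded linear operators on Y and is required to be positive semidefinite,

\[ \sum_{i,j=1}^{m} \langle y_i,\, K(x_i, x_j)\, y_j \rangle_{\mathcal{Y}} \;\ge\; 0 \qquad \text{for all } x_1, \dots, x_m \in X,\ y_1, \dots, y_m \in \mathcal{Y}. \]

Its RKHS \(\mathcal{H}_K\) consists of Y-valued functions, with kernel sections \((K_x y)(t) = K(t, x)\, y\) and reproducing property

\[ \langle f(x), y \rangle_{\mathcal{Y}} \;=\; \langle f, K_x y \rangle_{\mathcal{H}_K}, \qquad f \in \mathcal{H}_K. \]

Universality then means that, for every compact \(Z \subseteq X\), the span of \(\{K_x y : x \in Z,\ y \in \mathcal{Y}\}\) is dense in \(C(Z, \mathcal{Y})\) under the uniform norm.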
The paper concludes by highlighting the importance of the density property in both practical and theoretical contexts: in particular, density of the RKHS ensures the universal consistency of learning algorithms built on it. The results are applied to a range of examples, demonstrating the flexibility and power of operator-valued kernels in modeling relationships between tasks in multi-task learning.
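The simplest kernel of the kind the summary mentions, built from a scalar kernel and an operator, is K(x, x') = k(x, x') B, where k is a scalar kernel and B a positive semidefinite matrix coupling the tasks. The sketch below is illustrative only: the Gaussian choice of k, the function names, and the toy regression are assumptions, not material from the paper. It assembles the Gram matrix of K as a Kronecker product and runs multi-task kernel ridge regression.

import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    """Scalar Gaussian kernel matrix k(x, x') = exp(-gamma * ||x - x'||^2)."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * sq)

def multitask_gram(X1, X2, B, gamma=1.0):
    """Gram matrix of the operator-valued kernel K(x, x') = k(x, x') B.

    With n tasks, B is an n x n positive semidefinite matrix encoding task
    relatedness; the result is the Kronecker product k(X1, X2) (x) B, of
    shape (m1 * n, m2 * n), ordered point-major, task-minor.
    """
    return np.kron(gaussian_kernel(X1, X2, gamma), B)

# Toy example (an assumption for illustration): ridge regression over
# two related tasks, y1 = sin(3x) and y2 = sin(3x) + 0.5x, plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
Y = np.column_stack([np.sin(3 * X[:, 0]),
                     np.sin(3 * X[:, 0]) + 0.5 * X[:, 0]])
Y += 0.1 * rng.standard_normal(Y.shape)

B = np.array([[1.0, 0.8],   # strong positive coupling between the tasks;
              [0.8, 1.0]])  # B must be PSD for K to be a valid kernel

lam = 1e-2
G = multitask_gram(X, X, B)
# Solve (G + lam * I) c = vec(Y); Y.reshape(-1) stacks the task outputs
# per input point, matching the Kronecker ordering of G.
c = np.linalg.solve(G + lam * np.eye(G.shape[0]), Y.reshape(-1))

X_test = np.linspace(-1, 1, 5)[:, None]
pred = (multitask_gram(X_test, X, B) @ c).reshape(len(X_test), 2)
print(pred)

Setting B to the identity recovers independent single-task learning, while a rank-deficient B constrains all predictions to the range of B; heuristically, this is why one expects universality of k(x, x') B to require a universal scalar kernel k together with a full-rank B.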