2004 | Francis R. Bach, Gert R. G. Lanckriet & Michael I. Jordan
This paper presents a novel approach to multiple kernel learning (MKL) for support vector machines (SVMs), addressing the computational challenge of solving the underlying convex optimization problem. The authors recast the dual of the quadratically constrained quadratic program (QCQP) as a second-order cone program (SOCP) and apply Moreau-Yosida regularization to smooth its non-differentiable objective. This preserves the sparsity structure of the SVM and makes the problem amenable to the Sequential Minimal Optimization (SMO) algorithm.
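To make the kernel-combination idea concrete, here is a minimal sketch (not the paper's algorithm): it forms a convex combination of base Gram matrices and evaluates the standard SVM dual objective at a fixed dual point. The kernel choices, weights, and random data are illustrative assumptions.

    # Hedged sketch: a fixed convex combination of base kernels,
    # K(eta) = sum_j eta_j * K_j, plus the standard SVM dual objective.
    import numpy as np

    def rbf_kernel(X, gamma):
        # Gram matrix of the Gaussian (RBF) kernel with bandwidth gamma.
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def combined_kernel(kernels, eta):
        # MKL searches over K(eta) = sum_j eta_j K_j with eta_j >= 0.
        return sum(e * K for e, K in zip(eta, kernels))

    def svm_dual_objective(alpha, y, K):
        # Standard SVM dual: sum_i alpha_i - 0.5 * (alpha*y)^T K (alpha*y).
        ay = alpha * y
        return alpha.sum() - 0.5 * ay @ K @ ay

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = np.where(rng.normal(size=20) > 0, 1.0, -1.0)

    kernels = [X @ X.T, rbf_kernel(X, 0.5), rbf_kernel(X, 2.0)]
    eta = np.array([0.2, 0.5, 0.3])  # nonnegative weights summing to 1
    alpha = np.full(20, 0.1)         # arbitrary dual point, illustration only
    print(svm_dual_objective(alpha, y, combined_kernel(kernels, eta)))

In MKL proper, the weights eta are learned jointly with alpha rather than fixed in advance, subject to nonnegativity and a normalization constraint.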
The key contribution is the introduction of the Support Kernel Machine (SKM), a novel classification algorithm whose dual is equivalent to the MKL problem. The SKM formulation permits efficient optimization over kernel combinations, yielding a sparse combination of kernels through an optimization problem that can be solved with SMO. The paper also presents theoretical results supporting the algorithm and demonstrates its effectiveness through numerical experiments on real-world datasets.
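The sparsity mechanism can be illustrated in isolation. The SKM uses a weighted block l1-norm penalty (a sum of block l2-norms), whose proximal operator, the object computed in Moreau-Yosida regularization, is block soft-thresholding: blocks with small norm are zeroed, which is how entire kernels drop out of the combination. The block values and thresholds below are illustrative assumptions, not taken from the paper.

    # Hedged sketch: the proximal operator of w -> t * ||w||_2, applied blockwise.
    # Blocks whose norm falls below their threshold are zeroed out entirely.
    import numpy as np

    def block_soft_threshold(blocks, thresholds):
        out = []
        for w, t in zip(blocks, thresholds):
            nrm = np.linalg.norm(w)
            if nrm <= t:
                out.append(np.zeros_like(w))      # this block (kernel) is dropped
            else:
                out.append((1.0 - t / nrm) * w)   # shrink the block's norm by t
        return out

    blocks = [np.array([0.1, -0.05]), np.array([2.0, 1.0, -0.5])]
    print(block_soft_threshold(blocks, thresholds=[0.5, 0.5]))
    # the first block is zeroed out; the second is shrunk but kept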
The proposed method significantly improves upon existing optimization techniques by reducing computational cost, particularly as the number of kernels and data points grows. The algorithm is shown to scale better than general-purpose interior point methods, making it suitable for large-scale learning tasks. The results indicate that the SKM-based approach is more efficient and effective for multiple kernel learning than traditional methods.