2004 | Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, Yasemin Altun
This paper addresses the challenge of learning from complex outputs, such as multiple dependent variables and structured output spaces, by generalizing multiclass Support Vector Machine (SVM) learning. The authors propose a formulation that involves features extracted jointly from inputs and outputs, which is solved efficiently using a cutting plane algorithm that leverages the sparseness and structural decomposition of the problem. The method is demonstrated on various tasks, including supervised grammar learning, named-entity recognition, taxonomic text classification, and sequence alignment, showing its versatility and effectiveness. The paper also discusses the theoretical foundations of the approach, including the dual programs and the convergence analysis of the proposed algorithm. Empirical results on multiple datasets validate the method's performance and highlight its advantages over conventional approaches.