9 Sep 2013 | Julien Mairal, Francis Bach, and Jean Ponce
This paper presents a task-driven dictionary learning framework for supervised and semi-supervised learning. The authors propose a general formulation for learning dictionaries adapted to specific tasks, such as classification, regression, and compressed sensing, in which the dictionary and the task model parameters are learned jointly rather than in separate reconstruction and prediction stages. The resulting optimization problem is solved with an efficient stochastic gradient descent algorithm, which makes the approach practical in large-scale settings. The framework is evaluated on several tasks, including handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing, where it achieves state-of-the-art results. The paper also discusses extensions of the framework, including the use of linear transforms of the input data and a semi-supervised formulation that exploits unlabeled examples.
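To make the joint learning scheme concrete, here is a minimal NumPy sketch of one stochastic gradient step, not the authors' implementation: it assumes a square task loss 0.5*||y - W*alpha||^2, uses a simple ISTA solver for the elastic-net sparse coding step, and the function names, step sizes, and iteration counts are illustrative choices. The gradient with respect to the dictionary is computed implicitly through the sparse code on its active set, which is the key ingredient of the task-driven formulation.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_code(D, x, lam1, lam2, n_iter=200):
    """Sparse code alpha* = argmin 0.5||x - Da||^2 + lam1||a||_1 + 0.5*lam2||a||^2,
    computed with plain ISTA (an illustrative solver choice)."""
    alpha = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + lam2      # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x) + lam2 * alpha
        alpha = soft_threshold(alpha - grad / L, lam1 / L)
    return alpha

def tddl_step(D, W, x, y, lam1, lam2, rho):
    """One stochastic gradient step of task-driven dictionary learning
    for the square task loss 0.5*||y - W alpha*||^2 (simplified sketch)."""
    alpha = elastic_net_code(D, x, lam1, lam2)
    r = W @ alpha - y                          # task residual
    grad_W = np.outer(r, alpha)                # gradient w.r.t. model parameters W
    # Implicit gradient w.r.t. D, differentiating through alpha* on its active set
    Lam = np.flatnonzero(np.abs(alpha) > 1e-10)
    beta = np.zeros_like(alpha)
    if Lam.size:
        DL = D[:, Lam]
        g = (W.T @ r)[Lam]                     # d(loss)/d(alpha) restricted to the active set
        beta[Lam] = np.linalg.solve(DL.T @ DL + lam2 * np.eye(Lam.size), g)
    grad_D = -D @ np.outer(beta, alpha) + np.outer(x - D @ alpha, beta)
    W = W - rho * grad_W
    D = D - rho * grad_D
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # keep columns in the unit ball
    return D, W
```

Iterating `tddl_step` over random training pairs `(x, y)` with a decreasing step size `rho` corresponds to the stochastic gradient descent scheme described in the summary; the column-norm projection keeps the dictionary in the usual constraint set.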