The Annals of Statistics, 2011, Vol. 39, No. 3, 1335–1371
DOI: 10.1214/11-AOS878
© Institute of Mathematical Statistics, 2011

# THE SOLUTION PATH OF THE GENERALIZED LASSO

BY RYAN J. TIBSHIRANI AND JONATHAN TAYLOR

Stanford University

We present a path algorithm for the generalized lasso problem. This problem penalizes the $ \ell_{1} $ norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For D = I (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. For an arbitrary D, we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications.

The generalized lasso extends the lasso by allowing for more complex structure in the coefficient vector: it penalizes the $ \ell_{1} $ norm of a matrix D times the coefficient vector, where D is a specified penalty matrix. This formulation admits a wide range of applications, depending on the choice of D. Our path algorithm is based on solving the dual of the generalized lasso problem. When D is the identity matrix, this approach connects to the well-known LARS algorithm; for arbitrary D, it yields an unbiased estimate of the degrees of freedom of the generalized lasso fit, which is useful for statistical inference.

The paper is organized as follows. We begin in Section 2 by motivating the use of a penalty matrix D, offering several examples of problems that fit into this framework. Section 3 explains that some instances of the generalized lasso can be transformed into a regular lasso problem, but many cannot, emphasizing the need for a new path approach.
In Section 4, we derive the Lagrange dual of $ (2) $, which serves as the jumping-off point for our algorithm and all of the work that follows. For the sake of clarity, we build up the algorithm over the next three sections. Sections 5 and 6 consider the case X = I. In Section 5, we assume that D is the 1-dimensional fused lasso matrix, in which case our path algorithm takes an especially simple (and intuitive) form. In Section 6, we give the path algorithm for a general penalty matrix D, which requires adding only one step in the iterative loop. Section 7 extends the algorithm to the case of
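As a concrete illustration of the penalty structure described above, the sketch below (in NumPy; the example values and function names are ours, not from the paper) builds the 1-dimensional fused lasso matrix D, whose rows take first differences of adjacent coefficients, and evaluates the generalized lasso objective for the signal approximation case X = I.

```python
import numpy as np

def fused_lasso_D(n):
    """First-difference penalty matrix D of shape (n-1, n) for the
    1-dimensional fused lasso: (D @ beta)[i] = beta[i] - beta[i+1]."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = 1.0       # +1 on the diagonal
    D[idx, idx + 1] = -1.0  # -1 on the superdiagonal
    return D

def objective(beta, y, D, lam):
    """Generalized lasso objective with X = I:
    (1/2) * ||y - beta||_2^2 + lam * ||D @ beta||_1."""
    return 0.5 * np.sum((y - beta) ** 2) + lam * np.sum(np.abs(D @ beta))

# Toy example (illustrative values only).
y = np.array([1.0, 1.2, 3.0, 3.1])
D = fused_lasso_D(len(y))
# At beta = y the residual term vanishes, leaving only the penalty
# lam * (|1.0 - 1.2| + |1.2 - 3.0| + |3.0 - 3.1|).
val = objective(y, y, D, lam=1.0)
```

Here ‖Dβ‖₁ is the total variation of the coefficient vector, so increasing λ encourages β to be piecewise constant, which is the behavior the fused lasso is designed to capture.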