11 Jul 2012 | Patrick Danaher, Pei Wang, Daniela M. Witten
The paper introduces the *joint graphical lasso* (JGL), a method for estimating multiple Gaussian graphical models from high-dimensional data with observations belonging to distinct classes. The JGL borrows strength across classes to estimate graphical models that share certain characteristics, such as the locations or weights of nonzero edges. The approach is based on maximizing a penalized log likelihood, using generalized fused lasso or group lasso penalties, and implementing an alternating direction method of multipliers (ADMM) algorithm to solve the corresponding convex optimization problems. The performance of the JGL is demonstrated through simulated and real data examples, showing its ability to estimate sparse and similar network structures across classes. The paper also discusses the computational efficiency of the JGL, particularly in high-dimensional settings, and compares it with existing methods, highlighting its advantages in terms of speed and accuracy.
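To make the objective concrete, here is a minimal NumPy sketch of the fused-penalty variant of the penalized log likelihood described above: the sum over classes of the Gaussian log likelihood, minus a lasso penalty on off-diagonal precision entries and a fused penalty on between-class differences. The function name `jgl_objective` and the restriction of both penalties to off-diagonal entries are illustrative assumptions, not the paper's exact conventions.

```python
import numpy as np

def jgl_objective(Thetas, Ss, ns, lam1, lam2):
    """Fused joint graphical lasso objective (illustrative sketch).

    Thetas -- list of K estimated precision matrices (p x p)
    Ss     -- list of K empirical covariance matrices (p x p)
    ns     -- list of K class sample sizes
    lam1   -- sparsity penalty weight; lam2 -- fusion penalty weight
    Penalties are applied to off-diagonal entries only (an assumption
    made here for simplicity).
    """
    K, p = len(Thetas), Thetas[0].shape[0]
    # Sum of per-class Gaussian log likelihoods: n_k (log det Theta_k - tr(S_k Theta_k))
    loglik = sum(n * (np.linalg.slogdet(Th)[1] - np.trace(S @ Th))
                 for n, S, Th in zip(ns, Ss, Thetas))
    off = ~np.eye(p, dtype=bool)  # mask selecting off-diagonal entries
    # Lasso penalty encouraging sparsity within each class
    sparsity = lam1 * sum(np.abs(Th[off]).sum() for Th in Thetas)
    # Fused penalty encouraging similar edge weights across classes
    fusion = lam2 * sum(np.abs(Thetas[k][off] - Thetas[kp][off]).sum()
                        for k in range(K) for kp in range(k + 1, K))
    return loglik - sparsity - fusion
```

For identical identity-matrix inputs the penalties vanish and the value reduces to the unpenalized log likelihood; the ADMM algorithm in the paper maximizes this objective by splitting the likelihood and penalty terms into subproblems with closed-form or fast updates.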