This paper introduces and analyzes new methods for solving large-scale optimization problems, which are characterized by the high computational cost of full-dimensional vector operations. The proposed methods are based on random partial updates of decision variables and are designed to be more efficient than standard deterministic algorithms. The author proves global convergence rates for these methods, showing that they can outperform worst-case bounds for deterministic algorithms in certain classes of objective functions. The paper presents both constrained and unconstrained versions of the method, as well as an accelerated variant. Numerical tests confirm the high efficiency of the proposed techniques on very large-scale problems.
The introduction discusses the motivation behind these methods, highlighting the limitations of traditional coordinate descent methods, such as weak theoretical justification and high per-iteration computational complexity. The paper then delves into the theoretical analysis of the Random Coordinate Descent Method (RCDM), proving its expected convergence rate and demonstrating its effectiveness for strongly convex functions. The paper also explores the application of RCDM to constrained optimization problems and presents an accelerated version of the method. Finally, the paper discusses implementation details, including dynamic adjustment of Lipschitz constants and efficient random coordinate generation, and provides numerical results to support the theoretical findings.
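Since the summary refers to random partial updates of decision variables and to coordinate-wise Lipschitz constants, a minimal sketch may help fix ideas. The function name, the alpha-weighted sampling parameter, and the toy separable quadratic below are illustrative assumptions rather than code from the paper; the sketch only shows the general shape of a random coordinate update with 1/L_i step sizes.

import numpy as np

def random_coordinate_descent(grad_i, L, x0, n_iters=1000, alpha=0.0, rng=None):
    # grad_i(x, i): partial derivative of the objective along coordinate i
    # L: per-coordinate Lipschitz constants of those partial derivatives
    # alpha: coordinates sampled with probability proportional to L_i**alpha
    #        (alpha = 0 gives uniform sampling); this parameterization is an assumption
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    L = np.asarray(L, dtype=float)
    probs = L**alpha / np.sum(L**alpha)   # sampling distribution over coordinates
    for _ in range(n_iters):
        i = rng.choice(len(x), p=probs)   # pick one coordinate at random
        x[i] -= grad_i(x, i) / L[i]       # update only that coordinate, step 1/L_i
    return x

# Toy usage on a separable quadratic f(x) = 0.5 * sum(L_i * x_i**2),
# where the coordinate-wise Lipschitz constant is exactly L_i.
L = np.array([1.0, 10.0, 100.0])
grad_i = lambda x, i: L[i] * x[i]
x_approx = random_coordinate_descent(grad_i, L, x0=[1.0, 1.0, 1.0], n_iters=500)

The key point the sketch illustrates is that each iteration touches a single coordinate, so the per-iteration cost stays small even when the dimension is very large.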