Elad Hazan's *Introduction to Online Convex Optimization* (Second Edition) provides a comprehensive overview of online convex optimization (OCO), a framework for sequential decision-making under uncertainty. The book is structured as an advanced textbook for graduate-level courses and serves as a reference for researchers at the intersection of optimization and machine learning. It covers foundational concepts, algorithms, and applications of OCO, including prediction from expert advice, online spam filtering, shortest-path problems, portfolio selection, and matrix completion. The text introduces key algorithms such as online gradient descent, the Hedge algorithm, and the Online Newton Step, while also exploring advanced topics like regularization, bandit convex optimization, and adaptive regret. The book emphasizes the theoretical underpinnings of OCO, including regret analysis, and provides a rigorous treatment of convex optimization techniques. It also includes exercises and references for further reading, making it a valuable resource for both students and researchers in machine learning and optimization.

The second edition expands on the previous material, adding new chapters on adaptive regret, boosting, and Blackwell approachability, and includes updated analyses and corrections. The book is designed to be self-contained, with a clear structure that supports both teaching and independent study.
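To make the central algorithm concrete, below is a minimal sketch (not taken from the book) of projected online gradient descent over a Euclidean ball, using the standard decreasing step size $\eta_t = D/(G\sqrt{t})$ that yields $O(\sqrt{T})$ regret for convex, $G$-Lipschitz losses. The feasible set, loss functions, and helper names are illustrative assumptions chosen for the example.

```python
import numpy as np

def project_to_ball(x, radius):
    """Euclidean projection onto the ball of the given radius centered at 0."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0, G=1.0):
    """Projected online gradient descent (Zinkevich-style sketch).

    grad_fns: one gradient oracle per round; grad_fns[t](x) returns the
              gradient of the round-t loss at the current point x.
    Uses eta_t = D / (G * sqrt(t)) with D = 2 * radius (the set diameter),
    the step size that gives O(sqrt(T)) regret for convex Lipschitz losses.
    """
    D = 2.0 * radius
    x = np.zeros(dim)  # any feasible starting point works
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())            # play x_t, then observe the loss
        eta = D / (G * np.sqrt(t))           # decreasing step size
        x = project_to_ball(x - eta * grad(x), radius)
    return iterates

# Toy usage: quadratic losses f_t(x) = ||x - z_t||^2 with random targets z_t
# inside the unit ball (so G = 4 bounds the gradient norm on the feasible set).
rng = np.random.default_rng(0)
targets = [project_to_ball(rng.normal(size=3), 1.0) for _ in range(100)]
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
iterates = online_gradient_descent(grads, dim=3, radius=1.0, G=4.0)
```

The sketch reflects the structure of the OCO framework the book studies: commit to a point, observe a convex loss, take a gradient step, and project back onto the feasible set, with regret measured against the best fixed point in hindsight.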