This chapter provides a selective overview of multi-agent reinforcement learning (MARL), focusing on algorithms with theoretical analysis. It discusses the challenges in MARL theory, including non-unique learning goals, non-stationarity, scalability issues, and varying information structures. The chapter reviews MARL algorithms within two representative frameworks, Markov/stochastic games and extensive-form games, and highlights recent advances in these areas. It also introduces several significant applications of these algorithms, such as cyber-physical systems, finance, sensor networks, and social science. The chapter emphasizes new angles and taxonomies in MARL theory, including learning in extensive-form games, decentralized MARL with networked agents, MARL in the mean-field regime, and the (non-)convergence of policy-based methods for learning in games. The chapter aims to identify fruitful directions for future theoretical research on MARL and to stimulate further work in this exciting and challenging area. It also provides a roadmap of the chapter: background on MARL, challenges in MARL theory, a review of MARL algorithms with theoretical guarantees, recent successes of MARL, and open research directions.