This chapter provides a selective overview of multi-agent reinforcement learning (MARL), focusing on algorithms with theoretical analysis. It highlights the central challenges in MARL, such as multi-dimensional learning goals, non-stationarity, scalability issues, and varying information structures. The chapter reviews MARL algorithms within two representative frameworks, Markov/stochastic games and extensive-form games, covering fully cooperative, fully competitive, and mixed settings. It also introduces significant applications and discusses new angles and taxonomies in MARL theory, including learning in extensive-form games, decentralized MARL with networked agents, the mean-field regime, and convergence of policy-based methods. The goal is to identify future research directions and stimulate further theoretical studies in MARL.
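
For concreteness, the Markov (stochastic) game framework mentioned above is commonly formalized as the tuple below. This is a minimal sketch of the standard textbook definition; the notation is chosen here for illustration and is not necessarily the chapter's own.

\[
  \mathcal{G} \;=\; \bigl(\mathcal{N},\, \mathcal{S},\, \{\mathcal{A}^i\}_{i \in \mathcal{N}},\, P,\, \{R^i\}_{i \in \mathcal{N}},\, \gamma\bigr),
\]
where $\mathcal{N} = \{1,\dots,N\}$ is the set of agents, $\mathcal{S}$ is the state space, $\mathcal{A}^i$ is agent $i$'s action space, $P : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ (with joint action space $\mathcal{A} = \mathcal{A}^1 \times \cdots \times \mathcal{A}^N$) is the state transition kernel, $R^i : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is agent $i$'s reward function, and $\gamma \in [0,1)$ is a discount factor. Under this formalization, the three settings reviewed in the chapter correspond to restrictions on the rewards: fully cooperative (all agents share one reward, $R^1 = \cdots = R^N$), fully competitive (zero-sum, $\sum_{i \in \mathcal{N}} R^i = 0$), and mixed (general rewards).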