This survey provides a comprehensive review of the emerging field of *LLM-enhanced Reinforcement Learning (RL)*, aiming to clarify the research scope and chart directions for future studies. The authors define *LLM-enhanced RL* as methods that use the multi-modal information processing, generation, and reasoning capabilities of pre-trained, knowledge-inherent AI models to assist the RL paradigm. They propose a structured taxonomy that categorizes LLM functionalities in RL into four roles: information processor, reward designer, decision-maker, and generator. For each role, the survey details the methodologies involved, the specific RL challenges addressed, and open directions.

The survey highlights the potential applications, opportunities, and challenges of LLM-enhanced RL, emphasizing the need for a unified framework for integrating LLMs into the RL paradigm. Its contributions include defining the *LLM-enhanced RL* paradigm, proposing a unified taxonomy, and reviewing algorithmic advances within each LLM role. The survey also discusses the characteristics of LLM-enhanced RL, such as multi-modal information understanding, multi-task learning, improved sample efficiency, long-horizon task handling, and reward signal generation. Finally, it outlines future research directions, focusing on improving the generalization and adaptability of LLM-generated rewards and enhancing the effectiveness of explicit reward code generation.
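To make the four-role taxonomy concrete, the sketch below expresses each role as a minimal Python interface. The class and method names here are illustrative assumptions made for this summary, not an API defined by the survey.

```python
from abc import ABC, abstractmethod
from typing import Any

class InformationProcessor(ABC):
    """LLM compresses or translates raw multi-modal observations
    into compact features the RL agent can consume."""
    @abstractmethod
    def process(self, observation: Any) -> Any: ...

class RewardDesigner(ABC):
    """LLM produces scalar reward signals, or explicit reward code,
    from a natural-language task description."""
    @abstractmethod
    def reward(self, state: Any, action: Any, next_state: Any) -> float: ...

class DecisionMaker(ABC):
    """LLM acts directly as the policy, or guides an RL policy's actions."""
    @abstractmethod
    def act(self, state: Any) -> Any: ...

class Generator(ABC):
    """LLM serves as a generator, e.g., synthesizing imagined
    trajectories as a world model or producing policy explanations."""
    @abstractmethod
    def generate(self, context: Any) -> Any: ...
```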
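The "explicit reward code generation" direction mentioned above can be sketched as follows: the LLM is prompted to emit a Python reward function, whose output is parsed and validated before being handed to the RL loop. `query_llm`, the prompt template, and the required function name `compute_reward` are hypothetical placeholders for illustration, not the survey's specification.

```python
import ast

PROMPT_TEMPLATE = """You are a reward designer for an RL agent.
Task: {task}
Write a Python function `compute_reward(state, action, next_state)` that
returns a float. Output only the code."""

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion client.
    raise NotImplementedError("plug in your LLM client here")

def build_reward_fn(task_description: str):
    """Ask the LLM for reward code, sanity-check it, and return a callable."""
    source = query_llm(PROMPT_TEMPLATE.format(task=task_description))
    tree = ast.parse(source)  # raises SyntaxError on malformed output
    # Accept only a single function definition named `compute_reward`.
    defs = [node for node in tree.body if isinstance(node, ast.FunctionDef)]
    if len(defs) != 1 or defs[0].name != "compute_reward":
        raise ValueError("LLM output must define exactly `compute_reward`")
    namespace: dict = {}
    # In practice, generated code should be sandboxed before execution.
    exec(compile(tree, "<llm_reward>", "exec"), namespace)
    return namespace["compute_reward"]

# Usage (assuming a connected LLM client):
#   reward_fn = build_reward_fn("reach the goal while avoiding obstacles")
#   r = reward_fn(state, action, next_state)
```

Validating the generated code against a fixed signature before execution is one simple way to address the reliability concerns the survey raises about LLM-generated rewards.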