This survey provides a comprehensive review of large language model (LLM)-enhanced reinforcement learning (RL), summarizing its characteristics, challenges, and potential applications. It introduces a structured taxonomy that categorizes the roles LLMs play in RL: information processor, reward designer, decision-maker, and generator. Leveraging their natural language understanding, reasoning, and task-planning capabilities, LLMs in these roles can address key RL challenges such as sample inefficiency, the difficulty of reward function design, poor generalization, and the grounding of natural-language task specifications. The survey also explores multi-modal RL, where LLMs process visual and language data jointly to enhance RL performance. It characterizes LLM-enhanced RL by four main properties: multi-modal information understanding, multi-task learning, improved sample efficiency, and reward signal generation. Finally, it identifies future research directions, including aligning LLM-generated rewards with human intentions, improving the generalization of reward functions, and developing more efficient and effective methods for integrating LLMs into RL. Overall, the paper aims to clarify the research scope and chart directions for future studies in LLM-enhanced RL.
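To make the reward-designer role concrete, the following is a minimal sketch, not drawn from the survey itself, of how an LLM might turn a natural-language task description into an executable reward function for an RL loop. Here `query_llm` is a hypothetical stand-in for any chat-completion API, and the gridworld state keys and prompt wording are illustrative assumptions.

```python
# Minimal sketch of the "LLM as reward designer" pattern.
# All names below (query_llm, state keys, prompt) are hypothetical.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM provider's chat-completion API."""
    raise NotImplementedError("wire up your LLM provider here")

TASK_DESCRIPTION = "Reach the goal cell quickly while avoiding lava cells."

REWARD_PROMPT = f"""You are designing a dense reward function for an RL agent.
Task: {TASK_DESCRIPTION}
Write a Python function `reward(state, action, next_state)` returning a float.
`state` and `next_state` are dicts with keys 'agent_pos', 'goal_pos', 'on_lava'.
Return only the code."""

def build_llm_reward():
    """Ask the LLM for reward-function code and compile it into a callable."""
    code = query_llm(REWARD_PROMPT)
    namespace: dict = {}
    exec(code, namespace)  # trust boundary: validate and sandbox in real systems
    return namespace["reward"]

def shaped_step(env, policy, reward_fn, state):
    """One environment step whose reward is replaced by the LLM-designed one
    (assumes a classic gym-style env.step returning a 4-tuple)."""
    action = policy(state)
    next_state, _, done, info = env.step(action)
    r = reward_fn(state, action, next_state)  # LLM-generated reward signal
    return next_state, r, done, info
```

In practice, the generated code would be validated before execution, and one common refinement loop in this line of work feeds rollout statistics back to the LLM so it can iteratively revise the reward it proposed.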