The paper introduces Motion-Agent, an efficient conversational framework for generating, editing, and understanding 3D human motion with large language models (LLMs). Motion-Agent relies on MotionLLM, an open-source pre-trained LLM adapted to bridge motion and text by encoding and quantizing motions into discrete tokens. This design requires only lightweight fine-tuning of adapters, yet achieves performance comparable to diffusion models and transformer-based methods trained from scratch. By pairing MotionLLM with GPT-4, Motion-Agent can compose complex motion sequences through multi-turn conversations, a capability previous models have struggled to achieve. The framework supports a wide range of motion-language tasks, allowing users to generate and customize human motion through interactive conversational exchanges. The paper also presents a detailed evaluation of Motion-Agent and MotionLLM, demonstrating their effectiveness across tasks and comparing them with state-of-the-art methods.
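To make the token-bridge idea concrete, the following is a minimal sketch of how continuous motion features might be quantized against a VQ-style codebook and rendered as special text tokens that an LLM vocabulary could include. All names, sizes, and the random codebook are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch (assumed names and sizes): a VQ-style codebook maps
# continuous per-frame motion features to discrete token ids, which can then
# be written as special text tokens (e.g. "<motion_123>") for an LLM to read
# and emit.

rng = np.random.default_rng(0)
CODEBOOK_SIZE = 512   # number of discrete motion tokens (assumed)
FEATURE_DIM = 64      # per-frame motion feature dimension (assumed)
codebook = rng.normal(size=(CODEBOOK_SIZE, FEATURE_DIM))  # stand-in for a trained codebook


def quantize(motion_features: np.ndarray) -> np.ndarray:
    """Map each frame's feature vector to the id of its nearest codebook entry."""
    # (T, D) features vs. (K, D) codebook -> (T, K) squared distances
    dists = ((motion_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)


def tokens_to_text(token_ids: np.ndarray) -> str:
    """Render motion token ids as special text tokens for the language model."""
    return " ".join(f"<motion_{i}>" for i in token_ids)


def detokenize(token_ids: np.ndarray) -> np.ndarray:
    """Look up codebook vectors; a real decoder would reconstruct joint positions."""
    return codebook[token_ids]


# Example: round-trip a 30-frame motion clip through the discrete bridge.
clip = rng.normal(size=(30, FEATURE_DIM))
ids = quantize(clip)
print(tokens_to_text(ids)[:80], "...")
print("reconstruction shape:", detokenize(ids).shape)
```

Under this framing, the conversational layer (GPT-4 in the paper) only ever manipulates text, while the motion-specific model translates between such token strings and actual motion sequences.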