The paper provides a comprehensive survey of Vision-Language-Action (VLA) models in embodied AI, which integrate vision, language, and action modalities to handle instruction-following tasks. VLA models are designed to enhance versatility, dexterity, and generalizability in complex environments, making them suitable both for controlled settings such as factories and for everyday tasks such as cooking and cleaning. The survey traces the evolution from unimodal to multimodal models, highlighting key advances in computer vision, natural language processing, and reinforcement learning. It introduces a taxonomy of VLA models covering pretraining techniques, low-level control policies, and high-level task planners, and discusses methods for pretraining visual representations, dynamics learning, and world modeling, as well as different approaches to crafting low-level control policies and high-level task planners. The survey's contributions include a thorough review of emerging VLA models, a detailed taxonomy, and an overview of the resources needed to train and evaluate these models. Finally, it outlines current challenges and future directions in the field, emphasizing the need to address data scarcity, enhance robot dexterity, improve generalization, and ensure robot safety.
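To make the planner/policy split described above more concrete, the sketch below illustrates, at a very high level, how a high-level task planner can decompose a language instruction into sub-tasks that a low-level control policy then turns into actions. All names here (Observation, Action, HighLevelPlanner, LowLevelPolicy) are hypothetical stand-ins for illustration only, not APIs or architectures from any specific surveyed model; real VLA systems would use learned vision and language encoders rather than the stubs shown.

```python
from dataclasses import dataclass
from typing import List, Sequence

# Hypothetical interfaces for illustration; not from any surveyed VLA model.

@dataclass
class Observation:
    image_features: Sequence[float]  # placeholder for encoded camera input
    instruction: str                 # natural-language command


@dataclass
class Action:
    end_effector_delta: Sequence[float]  # e.g. a small Cartesian displacement
    gripper_open: bool


class HighLevelPlanner:
    """Decomposes a long-horizon instruction into short sub-tasks."""

    def plan(self, instruction: str) -> List[str]:
        # A real planner would typically query a language model;
        # this stub simply splits the instruction on the word "then".
        return [step.strip() for step in instruction.split("then") if step.strip()]


class LowLevelPolicy:
    """Maps the current observation and one sub-task to a motor action."""

    def act(self, obs: Observation, subtask: str) -> Action:
        # A learned policy would fuse visual and language embeddings;
        # this stub returns a no-op action to keep the example runnable.
        return Action(end_effector_delta=[0.0, 0.0, 0.0], gripper_open=True)


def run_episode(obs: Observation) -> List[Action]:
    """Plan sub-tasks from the instruction, then act on each in turn."""
    planner, policy = HighLevelPlanner(), LowLevelPolicy()
    return [policy.act(obs, subtask) for subtask in planner.plan(obs.instruction)]


if __name__ == "__main__":
    obs = Observation(
        image_features=[0.0] * 8,
        instruction="pick up the cup then place it on the shelf",
    )
    print(run_episode(obs))
```

The point of the sketch is only the division of labor: the planner handles the language-level reasoning over the full instruction, while the policy operates step by step on fused vision-language input, which mirrors the taxonomy's separation of high-level task planners from low-level control policies.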