PCA-Bench is a multimodal decision-making benchmark designed to evaluate the integrated capabilities of Multimodal Large Language Models (MLLMs). It introduces three complex scenarios—autonomous driving, domestic robotics, and open-world games—to assess models' ability to integrate perception, cognition, and action in a reasoning chain. The benchmark features error localization, which improves the reliability of deploying MLLMs by identifying the stage of the chain at which a model fails. PCA-Eval, an automatic evaluation protocol, is proposed to balance accuracy and efficiency, and 10 prevalent MLLMs are assessed with it. The results reveal significant performance disparities between open-source models and powerful proprietary models such as GPT-4 Vision. To narrow this gap, Embodied Instruction Evolution (EIE) is introduced: a framework for synthesizing instruction-tuning examples in multimodal embodied environments. EIE generates 7,510 training examples that enhance the performance of open-source MLLMs, occasionally surpassing GPT-4 Vision (+3% in decision accuracy). The findings suggest that robust MLLMs like GPT-4 Vision show promise for decision-making in embodied agents, opening new avenues for MLLM research.
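
To make the evaluation idea concrete, below is a minimal Python sketch of how a PCA-Eval-style scorer could localize errors along the perception-cognition-action chain. Everything here is an illustrative assumption rather than the benchmark's actual implementation: the `Example` and `ModelOutput` record formats, the `pca_scores` and `localize_error` helpers, and the keyword matching (which stands in for whatever semantic matching the real protocol performs) are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Example:
    """Hypothetical per-episode annotation; PCA-Bench's real schema may differ."""
    key_concepts: list[str]     # perception targets annotated in the image
    world_knowledge: list[str]  # facts the cognition step should invoke
    gold_action: str            # label of the correct action


@dataclass
class ModelOutput:
    """Hypothetical parsed model response."""
    reasoning: str  # free-form reasoning chain produced by the MLLM
    action: str     # the action the model finally selects


def pca_scores(example: Example, output: ModelOutput) -> dict[str, float]:
    """Score one episode along the perception-cognition-action chain.

    A stage counts as correct when its annotated evidence appears in the
    model's reasoning (perception, cognition) or when the chosen action
    matches the gold action. Plain substring matching is used here only
    as a stand-in for a semantic judge.
    """
    text = output.reasoning.lower()
    perception = all(c.lower() in text for c in example.key_concepts)
    cognition = all(k.lower() in text for k in example.world_knowledge)
    action = output.action == example.gold_action
    return {
        "perception": float(perception),
        "cognition": float(cognition),
        "action": float(action),
    }


def localize_error(scores: dict[str, float]) -> str:
    """Return the earliest failing stage in the P -> C -> A chain, if any."""
    for stage in ("perception", "cognition", "action"):
        if scores[stage] == 0.0:
            return stage
    return "none"


if __name__ == "__main__":
    # Toy autonomous-driving episode (invented for illustration).
    ex = Example(
        key_concepts=["red light"],
        world_knowledge=["stop at a red light"],
        gold_action="brake",
    )
    out = ModelOutput(
        reasoning="The image shows a red light; I should stop at a red light.",
        action="brake",
    )
    scores = pca_scores(ex, out)
    print(scores, localize_error(scores))  # all stages 1.0, error stage "none"
```

Under these assumptions, decision accuracy over a dataset would simply be the mean of the per-episode `action` scores, and error localization reduces to reporting the earliest stage whose score is zero.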