PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

21 Feb 2024 | Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Xiangdi Meng, Tianyu Liu, Baobao Chang
PCA-Bench is a multimodal decision-making benchmark designed to evaluate the integrated capabilities of Multimodal Large Language Models (MLLMs). It introduces three complex scenarios—autonomous driving, domestic robotics, and open-world games—to assess models' ability to integrate perception, cognition, and action in a reasoning chain. The benchmark features error localization, which improves the reliability of deploying MLLMs by pinpointing the stage of the chain where a model fails. PCA-Eval, an automatic evaluation protocol, is proposed to balance accuracy and efficiency, and 10 prevalent MLLMs are assessed with it. The results reveal significant performance gaps between open-source models and powerful proprietary models such as GPT-4 Vision. To address this, the paper introduces Embodied Instruction Evolution (EIE), a framework for synthesizing instruction-tuning examples in multimodal embodied environments. EIE generates 7,510 training examples that enhance the performance of open-source MLLMs, occasionally surpassing GPT-4 Vision (+3% in decision accuracy). The findings suggest that robust MLLMs like GPT-4 Vision show promise for decision-making in embodied agents, opening new avenues for MLLM research.
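To make the chain-scoring idea concrete, below is a minimal, hypothetical sketch of how an automatic protocol in the spirit of PCA-Eval might score a perception-cognition-action chain and localize the first erroneous stage. The field names, keyword-matching criterion, and `score_chain` helper are illustrative assumptions, not the official PCA-Bench schema or implementation (the actual protocol may rely on an LLM judge or semantic matching).

```python
from dataclasses import dataclass


@dataclass
class PCAInstance:
    """One benchmark example: reference answers for each stage of the chain.

    Field names are illustrative, not the official PCA-Bench schema.
    """
    perception_keys: set[str]   # key concepts the model should perceive
    cognition_keys: set[str]    # key inferences expected in the reasoning
    correct_action: str         # the reference action choice


def score_chain(instance: PCAInstance,
                perception_out: str,
                cognition_out: str,
                action_out: str) -> dict:
    """Score a perception-cognition-action chain and localize the first error.

    A simple keyword-matching stand-in for an automatic protocol such as
    PCA-Eval; shown only to illustrate per-stage scoring and error localization.
    """
    p_ok = all(k.lower() in perception_out.lower() for k in instance.perception_keys)
    c_ok = all(k.lower() in cognition_out.lower() for k in instance.cognition_keys)
    a_ok = action_out.strip().lower() == instance.correct_action.strip().lower()

    # Error localization: report the earliest stage of the chain that failed.
    if not p_ok:
        first_error = "perception"
    elif not c_ok:
        first_error = "cognition"
    elif not a_ok:
        first_error = "action"
    else:
        first_error = None

    return {"perception": p_ok, "cognition": c_ok, "action": a_ok,
            "first_error": first_error}


if __name__ == "__main__":
    ex = PCAInstance(
        perception_keys={"red light", "pedestrian"},
        cognition_keys={"must stop"},
        correct_action="brake",
    )
    print(score_chain(ex,
                      "I see a red light and a pedestrian crossing ahead.",
                      "The vehicle must stop to obey the signal.",
                      "brake"))
```

The key design point this sketch captures is that each stage is graded separately, so a wrong action can be attributed to a perception failure, a cognition failure, or the action decision itself.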