The paper introduces Ch³Ef, a comprehensive dataset and evaluation strategy for assessing how well Multimodal Large Language Models (MLLMs) align with human values, defined by the principles of being helpful, honest, and harmless (hhh). Despite MLLMs' advances in perception and reasoning tasks, their alignment with the hhh principles remains underexplored, both because these principles are difficult to define in the visual domain and because relevant data are hard to collect. The Ch³Ef dataset contains 1,002 human-annotated samples spanning 12 domains and 46 tasks, covering A1 (semantics), A2 (logic), and A3 (human values). The accompanying evaluation strategy supports assessments across diverse scenarios and perspectives, enabling a unified view of MLLMs' capabilities and limitations. The results reveal significant trade-offs in visual capabilities, expose domain-specific challenges, and underscore the need for stronger alignment with human values. The study also highlights the importance of balancing safety and engagement in AI interactions and suggests strategies for improving human-value alignment in MLLMs. The dataset and evaluation codebase are available for further research.