This paper introduces a large multi-modality model-assisted AI-generated image quality assessment (MA-AGIQA) framework to address the limitations of traditional deep neural network (DNN)-based image quality assessment (IQA) models in evaluating AI-generated images (AGIs). Unlike photographs, AGIs typically lack the degradations introduced by camera equipment or capture techniques, so their perceived quality hinges on semantic content and coherence, which traditional DNN-based IQA models struggle to capture. MA-AGIQA therefore pairs a large multi-modality model (LMM) with a traditional DNN-based IQA backbone: the LMM, mPLUG-Owl2, extracts fine-grained semantic features through carefully designed prompts, while a mixture-of-experts (MoE) structure dynamically fuses these semantic features with the quality-aware features extracted by the DNN-based model, enabling the framework to handle the varied quality aspects of AGIs.
Comprehensive experiments on two AI-generated content datasets, AGIQA-3k and AIGCQA-20k, and two traditional IQA datasets show that MA-AGIQA achieves state-of-the-art performance, outperforming existing methods in Spearman's rank-order correlation coefficient (SRCC) and Pearson's linear correlation coefficient (PLCC) and demonstrating strong generalization in assessing AGI quality. Extensive ablation studies confirm the contribution of each component of the framework. Together, the results highlight the importance of incorporating semantic information into IQA models and the potential of LMMs for enhancing the quality assessment of AI-generated images.
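To make the fusion step concrete, the following is a minimal PyTorch sketch of how an MoE-style gate might dynamically weigh LMM semantic features against quality-aware IQA features. All module names, feature dimensions, and the two-expert design here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class GatedFeatureFusion(nn.Module):
    """Hypothetical two-expert MoE-style fusion: a gating network produces
    per-sample weights over the LMM semantic feature and the DNN quality-aware
    feature, and a small regressor maps the fused vector to a quality score."""

    def __init__(self, sem_dim: int = 4096, qual_dim: int = 768, hidden: int = 512):
        super().__init__()
        # Project both feature streams into a shared space (dimensions assumed).
        self.sem_proj = nn.Linear(sem_dim, hidden)
        self.qual_proj = nn.Linear(qual_dim, hidden)
        # Gating network: per-sample mixture weights over the two experts.
        self.gate = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Softmax(dim=-1))
        self.regressor = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, sem_feat: torch.Tensor, qual_feat: torch.Tensor) -> torch.Tensor:
        s = self.sem_proj(sem_feat)                 # (B, hidden)
        q = self.qual_proj(qual_feat)               # (B, hidden)
        w = self.gate(torch.cat([s, q], dim=-1))    # (B, 2) dynamic expert weights
        fused = w[:, :1] * s + w[:, 1:] * q         # convex combination of experts
        return self.regressor(fused).squeeze(-1)    # predicted quality score, (B,)

# Usage with dummy tensors standing in for mPLUG-Owl2 and IQA-backbone outputs.
model = GatedFeatureFusion()
sem = torch.randn(4, 4096)    # placeholder LMM semantic features
qual = torch.randn(4, 768)    # placeholder quality-aware features
scores = model(sem, qual)     # shape (4,)
```

The key property this sketch illustrates is that the gate's weights are computed per image, so the model can lean on semantic features for images whose quality issues are primarily about content and coherence, and on quality-aware features for more conventional distortions.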