This paper presents a method to enhance blind video quality assessment (BVQA) models for social media videos by incorporating rich quality-aware features. The authors leverage pre-trained features from various computer vision models, specifically blind image quality assessment (BIQA) and BVQA models, to improve robustness to the complex distortions and diverse content of social media videos. The proposed model, built on the SimpleVQA framework, uses a Swin Transformer-B for spatial feature extraction and a SlowFast network for temporal feature extraction. Additionally, it integrates features from LIQE, Q-Align, and FAST-VQA to capture frame-level, scene-specific, and spatiotemporal quality-aware features. These features are concatenated and regressed into video quality scores by a multi-layer perceptron (MLP) network. Experimental results show that the proposed model achieves the best performance on three public social media VQA datasets and won first place in the CVPR NTIRE 2024 Short-form UGC Video Quality Assessment Challenge. The core contributions are enhancing the SimpleVQA framework with rich quality-aware features and using a multi-head self-attention module to capture salient frame regions.
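To make the fusion-and-regression stage concrete, the following is a minimal PyTorch sketch of the two components named above: a multi-head self-attention module that re-weights spatial tokens toward salient frame regions, and an MLP head that regresses the concatenated quality-aware features into a score. All feature dimensions, module names, and the simple mean pooling are illustrative assumptions, not the paper's actual configuration; the real extractors (Swin-B, SlowFast, LIQE, Q-Align, FAST-VQA) are represented only by random stand-in tensors.

import torch
import torch.nn as nn

class SalientRegionPooling(nn.Module):
    """Sketch: multi-head self-attention over spatial tokens from the
    backbone, so salient regions dominate the pooled frame descriptor.
    Token count and embedding size are assumptions."""
    def __init__(self, dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (batch, num_regions, dim) spatial tokens for one frame
        out, _ = self.attn(tokens, tokens, tokens)  # self-attention
        return out.mean(dim=1)                      # pooled descriptor

class QualityAwareFusionHead(nn.Module):
    """Sketch: concatenate per-video features from several pre-trained
    extractors and regress them to a scalar quality score with an MLP.
    The per-branch dimensions below are placeholders."""
    def __init__(self, dims=(1024, 2304, 495, 4096, 768), hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sum(dims), hidden),  # concatenated feature length
            nn.GELU(),
            nn.Linear(hidden, 1),          # one quality score per video
        )

    def forward(self, feats):
        # feats: list of (batch, dim_i) tensors, one per feature branch
        return self.mlp(torch.cat(feats, dim=-1)).squeeze(-1)

# Hypothetical usage: a batch of 4 videos, one pooled vector per branch
# (spatial, temporal, and the three quality-aware feature stand-ins).
feats = [torch.randn(4, d) for d in (1024, 2304, 495, 4096, 768)]
scores = QualityAwareFusionHead()(feats)
print(scores.shape)  # torch.Size([4])

In this reading, the attention module acts as a learned pooling step on each branch's spatial tokens before fusion, while the MLP head is the final regressor the summary describes; how the paper arranges these pieces in detail may differ.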