The paper introduces the Semantic-Aware Discriminator (SeD) for Image Super-Resolution (SR), addressing the coarse-grained distribution learning of traditional discriminators, which often leads to unrealistic textures. SeD incorporates semantics extracted from pretrained vision models (PVMs) into the discriminator, enabling it to distinguish real from fake images in a finer-grained, semantically adaptive manner. This, in turn, encourages the SR network to generate more photo-realistic and visually pleasing textures. The paper details the design of SeD, including the semantic feature extractor and the semantic-aware fusion block (SeFB), and demonstrates its effectiveness through extensive experiments on classical and real-world SR tasks. The results show that SeD outperforms existing methods in both objective and subjective quality metrics, validating its potential for improving SR performance.
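To make the core idea concrete, below is a minimal PyTorch sketch of a semantic-aware fusion block, assuming the SeFB injects semantic features from a frozen pretrained vision model into the discriminator's image features via cross-attention with a residual connection. The layer names, dimensions, and attention layout here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeFB(nn.Module):
    """Semantic-aware fusion block (sketch): fuses semantic features from a
    pretrained vision model (PVM) into discriminator image features.
    Details (projection sizes, attention form) are assumptions."""
    def __init__(self, img_channels, sem_channels, dim=64, heads=4):
        super().__init__()
        self.to_q = nn.Conv2d(sem_channels, dim, 1)  # queries from semantic features
        self.to_k = nn.Conv2d(img_channels, dim, 1)  # keys from image features
        self.to_v = nn.Conv2d(img_channels, dim, 1)  # values from image features
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Conv2d(dim, img_channels, 1)

    def forward(self, img_feat, sem_feat):
        b, _, h, w = img_feat.shape
        # Resize the semantic map to the discriminator feature resolution.
        sem_feat = F.interpolate(sem_feat, size=(h, w), mode="bilinear",
                                 align_corners=False)
        q = self.to_q(sem_feat).flatten(2).transpose(1, 2)   # (B, HW, dim)
        k = self.to_k(img_feat).flatten(2).transpose(1, 2)
        v = self.to_v(img_feat).flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, k, v)                         # semantics guide attention
        fused = fused.transpose(1, 2).reshape(b, -1, h, w)
        return img_feat + self.proj(fused)                    # residual fusion

# Usage sketch (names hypothetical): semantic features come from a frozen PVM,
# image features from the discriminator's own conv stack; the fused features
# then feed the real/fake prediction head.
# sem_feat = frozen_pvm(image)            # (B, C_sem, h, w), PVM kept frozen
# img_feat = disc_backbone(image)         # (B, C_img, H, W)
# out = SeFB(C_img, C_sem)(img_feat, sem_feat)
```

The intended effect of such a design is that the real/fake decision is conditioned on region-level semantics, so the discriminator penalizes textures that are implausible for their semantic content rather than merely matching a global image distribution.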