This paper addresses the challenge of image compression, focusing on improving the performance of learned compression algorithms. The authors propose a method that uses discretized Gaussian Mixture Likelihoods to parameterize the distributions of the latent codes, improving the accuracy and flexibility of the entropy model. In addition, they incorporate attention modules into the network architecture to improve coding efficiency by focusing on complex regions. Experimental results show that the proposed method achieves state-of-the-art performance compared with existing learned compression methods and traditional compression standards such as HEVC, JPEG2000, and JPEG. Notably, the method achieves PSNR performance comparable to the latest compression standard, Versatile Video Coding (VVC), and produces more visually pleasing results when optimized for MS-SSIM. The project page is available at <https://github.com/ZhengxueCheng/Learned-Image-Compression-with-GMM-and-Attention>.
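
To make the entropy-model idea concrete, below is a minimal sketch of how a discretized Gaussian mixture likelihood can be evaluated for a single quantized latent symbol, assuming K mixture components whose weights, means, and scales are predicted per latent element (e.g., by a hyperprior or context model). The function name and the illustrative parameter values are hypothetical and not taken from the authors' code; this is an interpretation of the general technique, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm


def discretized_gmm_likelihood(y, weights, means, scales):
    """Probability of an integer-quantized latent value y under a
    K-component discretized Gaussian mixture:

        p(y) = sum_k w_k * (Phi((y + 0.5 - mu_k) / sigma_k)
                            - Phi((y - 0.5 - mu_k) / sigma_k))

    where Phi is the standard normal CDF. Each latent element would get
    its own mixture parameters in a learned compression model.
    """
    weights = np.asarray(weights, dtype=np.float64)
    means = np.asarray(means, dtype=np.float64)
    scales = np.asarray(scales, dtype=np.float64)

    # Per-component probability mass on the quantization bin [y-0.5, y+0.5].
    upper = norm.cdf(y + 0.5, loc=means, scale=scales)
    lower = norm.cdf(y - 0.5, loc=means, scale=scales)

    # Weighted sum over mixture components.
    return float(np.sum(weights * (upper - lower)))


# Illustrative example: a 3-component mixture for one latent element.
p = discretized_gmm_likelihood(
    y=2,
    weights=[0.6, 0.3, 0.1],
    means=[1.8, -0.5, 4.0],
    scales=[0.7, 1.2, 0.4],
)
bits = -np.log2(p)  # estimated coding cost of this symbol in bits
print(f"p(y=2) = {p:.4f}, ~{bits:.2f} bits")
```

Summing the per-symbol coding costs, -log2 p(y), over all latent elements gives the rate term that such models trade off against distortion during training.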