This paper explores the relationship between idempotence and perceptual image compression. Idempotence refers to a codec's stability under re-compression: compressing its own reconstruction should change nothing. Perceptual image compression aims for high-quality, realistic reconstructions at low bitrate.

The authors show that the conditional generative models used in perceptual image compression inherently satisfy idempotence, and, conversely, that an unconditional generative model subject to an idempotence constraint is equivalent to a conditional generative model. Based on this equivalence, they propose a new perceptual compression paradigm: inverting an unconditional generative model under an idempotence constraint. The approach requires no new model training, only a pre-trained mean-squared-error (MSE) codec and a pre-trained unconditional generative model, and it is consistent with the theoretical results on the rate-distortion-perception trade-off.

Empirically, the method is implemented as a practical codec and evaluated on a variety of image datasets, where it outperforms state-of-the-art perceptual codecs such as HiFiC and ILLM in Fréchet Inception Distance (FID) while also achieving lower MSE, and it remains efficient in training and testing complexity. The authors conclude that idempotence and perceptual image compression are closely related, and that the proposed method provides a new paradigm for perceptual image compression.
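The two key ideas above, idempotence and decoding by inverting a generative model under an idempotence constraint, can be illustrated with a deliberately toy sketch. Here a scalar uniform quantizer stands in for the pre-trained MSE codec, and uniform random samples stand in for draws from an unconditional generative model; both stand-ins are assumptions for illustration, not the paper's actual components.

```python
import numpy as np

def codec(x, step=32.0):
    # Stand-in for a pre-trained MSE codec: quantize then dequantize.
    # A real codec would be a learned encoder/decoder pair.
    return np.round(x / step) * step

# Idempotence: re-compressing a reconstruction changes nothing.
x = 123.4
x_hat = codec(x)
assert codec(x_hat) == x_hat

# Perceptual decoding as constrained inversion (toy version): among
# samples from a "generator" (here just uniform noise standing in for
# a generative prior over images), keep those whose compression matches
# codec(x), i.e. enforce the idempotence constraint codec(x_rec) == codec(x).
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 255.0, size=10_000)
feasible = samples[codec(samples) == codec(x)]

# Every feasible sample maps to the same compressed representation as x,
# so any of them is a valid, prior-consistent reconstruction.
assert np.all(codec(feasible) == codec(x))
```

In the paper's actual method the constraint is enforced by guided sampling or optimization over the generative model's latent space rather than by rejection sampling, but the feasibility condition is the same: the reconstruction must re-compress to the original bitstream.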