21 Apr 2021 | Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, Daniel Cohen-Or
The paper introduces a novel image-to-image translation framework called pixel2style2pixel (pSp), which leverages a StyleGAN encoder to directly map real images into the extended latent space $\mathcal{W}+$ without additional optimization. This approach allows for efficient and accurate embedding of real images into the latent domain, enabling various image-to-image translation tasks. pSp deviates from the standard "invert first, edit later" methodology, making it capable of handling tasks where the input image does not reside in the StyleGAN domain. The framework is demonstrated to be effective in multiple applications, including StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting, and super-resolution. pSp supports multi-modal synthesis by resampling styles, and it compares favorably with state-of-the-art methods, even in challenging cases. The code for pSp is available at <https://github.com/eladrich/pixel2style2pixel>.
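The multi-modal synthesis mechanism mentioned above can be sketched as follows: the encoder produces one style vector per StyleGAN layer, and variants are generated by keeping the encoder's coarse (structural) codes while resampling the fine (appearance) codes. This is a minimal illustrative sketch in NumPy; the `encode` stub, the shapes, and the coarse/fine split point are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

# Assumption: StyleGAN at 1024x1024 resolution uses 18 style vectors of dim 512.
N_STYLES, DIM = 18, 512

def encode(image_id):
    """Stand-in for the pSp encoder: maps an input image to W+ codes.
    (Hypothetical stub -- the real encoder is a learned CNN.)"""
    rng = np.random.default_rng(abs(hash(image_id)) % 2**32)
    return rng.standard_normal((N_STYLES, DIM))

def multimodal_sample(w_enc, n_variants=3, coarse_layers=7, seed=0):
    """Generate variants: keep the first `coarse_layers` codes (structure)
    from the encoder, resample the remaining fine codes (appearance)."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        w = w_enc.copy()
        w[coarse_layers:] = rng.standard_normal((N_STYLES - coarse_layers, DIM))
        variants.append(w)
    return variants

w = encode("input.png")
variants = multimodal_sample(w)
```

Each variant would then be fed through the pretrained StyleGAN generator, yielding outputs that share the input's structure but differ in fine style, which is how pSp produces multiple plausible outputs for tasks like conditional synthesis.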