CV-VAE: A Compatible Video VAE for Latent Generative Video Models

23 Oct 2024 | Sijie Zhao, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Muyao Niu, Xiaoyu Li, Wenbo Hu, Ying Shan
The paper "CV-VAE: A Compatible Video VAE for Latent Generative Video Models" addresses the challenge of spatio-temporal compression in video generation, particularly in the context of variational autoencoders (VAEs). The authors propose CV-VAE, a method for training a video VAE whose latent space is compatible with existing image and video models, such as those trained with the Stable Diffusion (SD) image VAE.

The key contributions of CV-VAE include:

1. **Latent Space Compatibility**: The latent space of the video VAE is compatible with that of the image VAE, allowing seamless integration and training with existing models.
2. **Latent Space Regularization**: A regularization loss aligns the latent distributions of the image and video VAEs, preventing distribution shifts and ensuring compatibility.
3. **Efficient Architecture**: The video VAE uses a combination of 2D and 3D convolutions to handle both spatial and temporal compression efficiently.
4. **Performance and Evaluation**: Extensive experiments demonstrate high-quality image and video reconstruction with a 4x frame compression rate in the temporal dimension, as well as compatibility with existing diffusion models, enabling the generation of smoother and longer videos with minimal fine-tuning.

The paper highlights the importance of this work in advancing the field of video generation by providing a robust and efficient solution for training latent generative video models.
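As a rough illustration of the compression described above, the sketch below computes the latent shape a CV-VAE-style encoder would produce for a video clip. The 4x temporal factor comes from the summary; the 8x spatial factor and 4 latent channels are assumptions carried over from the Stable Diffusion image VAE that CV-VAE is designed to be compatible with, not figures stated here:

```python
def cvvae_latent_shape(frames, height, width,
                       t_factor=4, s_factor=8, latent_channels=4):
    """Estimate the latent tensor shape of a CV-VAE-style encoder.

    t_factor=4 is the temporal compression rate stated in the paper
    summary; s_factor=8 and latent_channels=4 are assumed to match
    the SD image VAE for latent-space compatibility.
    """
    return (frames // t_factor, latent_channels,
            height // s_factor, width // s_factor)

# A 16-frame 512x512 clip compresses to a (4, 4, 64, 64) latent.
print(cvvae_latent_shape(16, 512, 512))
```

Because the per-frame spatial layout matches the image VAE's latent grid, a pretrained image diffusion model can operate on each temporal slice of this latent with little or no modification, which is the compatibility property the paper emphasizes.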