FiT: Flexible Vision Transformer for Diffusion Model


19 Feb 2024 | Zeyu Lu 1 2*, Zidong Wang 1 3*, Di Huang 1 4, Chengyue Wu 5, Xihui Liu 5, Wanli Ouyang 1, Lei Bai 1
The Flexible Vision Transformer (FiT) is a novel architecture designed to generate images at unrestricted resolutions and aspect ratios. Unlike traditional methods that treat images as static-resolution grids, FiT conceptualizes images as sequences of dynamically-sized tokens, enabling flexible training and inference. This approach lets FiT adapt to diverse aspect ratios without cropping or resizing images, preserving the integrity of the original resolution.

Key contributions include:

1. **Flexible Training Pipeline**: FiT preserves each image's original aspect ratio during training by viewing it as a sequence of tokens, adaptively resizing high-resolution images to fit within a predefined maximum token limit.
2. **Network Architecture**: FiT incorporates 2D Rotary Positional Embedding (2D RoPE) and the Swish-Gated Linear Unit (SwiGLU) to enhance performance and handle padding tokens efficiently.
3. **Training-Free Resolution Extrapolation**: FiT applies training-free length-extrapolation techniques, such as NTK-aware interpolation and YaRN, to improve generation at resolutions beyond those seen during training.

Experiments demonstrate that FiT outperforms state-of-the-art models in generating images at various resolutions and aspect ratios, both within and beyond the training distribution. The largest model, FiT-XL/2, achieves superior performance across multiple resolutions, showcasing its effectiveness and flexibility in image generation.
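The 2D RoPE mentioned in contribution 2 can be illustrated with a minimal numpy sketch. This is one common construction (not necessarily FiT's exact implementation): half of each token's channels are rotated by angles derived from its row index and the other half by its column index, so attention scores depend only on relative 2D offsets. The function names `rope_rotate_2d` and `axis_angles` are hypothetical.

```python
import numpy as np

def axis_angles(pos, half_dim, base=10000.0):
    """Rotation angles for one spatial axis, using the RoPE frequency
    schedule theta_i = base^(-2i/d) over feature pairs."""
    n_pairs = half_dim // 2
    theta = base ** (-np.arange(n_pairs) / n_pairs)
    return pos * theta  # one angle per feature pair

def rope_rotate_2d(x, row, col, base=10000.0):
    """Apply 2D RoPE: channels [0, d/2) encode the row index,
    channels [d/2, d) encode the column index."""
    d = x.shape[-1]
    assert d % 4 == 0, "head dim must be divisible by 4"
    out = x.astype(float).copy()
    for pos, sl in ((row, slice(0, d // 2)), (col, slice(d // 2, d))):
        ang = axis_angles(pos, d // 2, base)
        cos, sin = np.cos(ang), np.sin(ang)
        seg = out[..., sl]
        x1, x2 = seg[..., 0::2].copy(), seg[..., 1::2].copy()
        seg[..., 0::2] = x1 * cos - x2 * sin  # rotate each feature pair
        seg[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Because each feature pair is rotated (an orthogonal map), token norms are preserved, and the dot product between a rotated query and key depends only on the (row, col) offset between them, which is what makes the embedding resolution-agnostic.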
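The SwiGLU block from contribution 2 is also easy to sketch. The sketch below follows the standard SwiGLU formulation (a swish-gated feed-forward layer); the weight names are placeholders, not FiT's parameter names:

```python
import numpy as np

def swish(x):
    """Swish / SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    """SwiGLU feed-forward block: the up-projection is gated
    elementwise by swish of a parallel gate projection."""
    return (swish(x @ w_gate) * (x @ w_up)) @ w_down
```

Compared with a plain GELU MLP, the multiplicative gate adds a third projection matrix, so the hidden width is typically reduced (e.g. to 2/3 of the usual 4x expansion) to keep the parameter count comparable.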
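For contribution 3, NTK-aware interpolation can be sketched as a rescaling of the RoPE base frequency. The formula below is the widely used 1D NTK-aware scaling rule (base' = base * s^(d/(d-2)) for a scale factor s); applying it per spatial axis of a 2D RoPE is an assumption here, not a statement of FiT's exact recipe:

```python
def ntk_scaled_base(base, dim, scale):
    """NTK-aware interpolation: enlarge the RoPE base so that at
    `scale`-times-longer positions the lowest-frequency channel sweeps
    the same angular range it saw during training, while high-frequency
    channels are nearly unchanged."""
    return base * scale ** (dim / (dim - 2))
```

The effect is that positions far beyond the training range no longer produce rotation angles the attention layers have never seen, which is why the extrapolation is training-free.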