13 May 2024 | Quankai Gao¹,², Qiangeng Xu², Zhe Cao², Ben Mildenhall², Wenchao Ma³, Le Chen⁴, Danhang Tang², and Ulrich Neumann¹
GaussianFlow is a method for 4D content creation that works by splatting Gaussian dynamics. It introduces the Gaussian flow, a quantity that connects the dynamics of 3D Gaussians to pixel velocities between consecutive frames: by splatting 3D Gaussian dynamics into image space, the Gaussian flow can be obtained efficiently and supervised directly with optical flow. This benefits both 4D generation and 4D novel view synthesis, especially for content with rich motion that is challenging for existing methods, and the improved Gaussian dynamics also resolve the color drifting commonly seen in 4D generation.

The scene is represented as a 4D Gaussian field optimized with a photometric loss, an SDS loss, and flow supervision. The method is implemented in CUDA, is efficient, and is end-to-end differentiable.
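As a rough illustration of the flow supervision, the sketch below (PyTorch) splats per-Gaussian motion into an image-space flow map and compares it against a precomputed optical flow map with an L1 loss. It is a simplified, translation-only approximation under assumed inputs: the per-pixel alpha-compositing weights (`weights`) and the projected 2D Gaussian centers (`means2d_t`, `means2d_t1`) are hypothetical intermediate outputs of a rasterizer, not part of the paper's actual interface.

```python
# Hedged sketch of flow supervision on splatted Gaussian dynamics.
# Simplification: each Gaussian's contribution to a pixel's motion is reduced
# to the translation of its projected 2D mean between frames t and t+1,
# weighted by that Gaussian's alpha-compositing weight at the pixel.
import torch
import torch.nn.functional as F


def gaussian_flow(means2d_t, means2d_t1, weights):
    """Splat per-Gaussian 2D motion into a dense image-space flow field.

    means2d_t, means2d_t1: (N, 2) projected Gaussian centers at frames t and t+1.
    weights: (H, W, N) per-pixel alpha-compositing weights from frame t.
    Returns an (H, W, 2) flow map: the weighted sum of per-Gaussian shifts.
    """
    shifts = means2d_t1 - means2d_t                      # (N, 2) per-Gaussian pixel motion
    return torch.einsum('hwn,nc->hwc', weights, shifts)  # composite into a flow map


def flow_loss(means2d_t, means2d_t1, weights, optical_flow):
    """L1 distance between the splatted Gaussian flow and a precomputed optical flow map."""
    pred_flow = gaussian_flow(means2d_t, means2d_t1, weights)
    return F.l1_loss(pred_flow, optical_flow)
```

Because the splatted flow is a differentiable function of the Gaussian parameters, gradients from this loss propagate back into the 4D Gaussian field alongside the other objectives.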
Evaluated on the Consistent4D dataset and the Plenoptic Video Datasets, GaussianFlow achieves state-of-the-art results on both 4D generation and 4D novel view synthesis in quantitative and qualitative comparisons, with the largest gains in dynamic regions with high optical flow values, and it compares favorably in PSNR against other methods on dynamic scenes. Ablation studies show that flow supervision improves motion consistency and reduces the ambiguities left by photometric supervision alone. The generated content exhibits smooth, natural motion even in highly dynamic regions, handles glossy objects with specular highlights that challenge most methods, and shows reduced color drifting with more consistent texture and geometry.
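To make the overall optimization concrete, below is a minimal sketch of how the three losses named above, photometric, SDS, and flow supervision, could be combined in a single training step. The loss weights and the `render`, `sds_loss`, and `flow_loss` callables are illustrative assumptions, not the paper's actual interfaces.

```python
# Hedged sketch of one optimization step for the 4D Gaussian field.
# The weighting coefficients and all helper callables are hypothetical.
import torch
import torch.nn.functional as F

LAMBDA_PHOTO, LAMBDA_SDS, LAMBDA_FLOW = 1.0, 0.1, 0.5  # hypothetical loss weights


def training_step(field, batch, optimizer, render, sds_loss, flow_loss):
    """Combine photometric, SDS, and flow supervision into one backward pass."""
    rendered = render(field, batch['camera'], batch['time'])        # rendered image at frame t
    photo = F.l1_loss(rendered, batch['image'])                     # photometric term
    sds = sds_loss(rendered, batch['prompt'])                       # diffusion guidance (4D generation)
    flow = flow_loss(field, batch['camera'], batch['time'])         # splatted Gaussian flow vs. optical flow
    loss = LAMBDA_PHOTO * photo + LAMBDA_SDS * sds + LAMBDA_FLOW * flow
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the SDS term would only be active for the generation setting, while the photometric and flow terms apply to both 4D generation and 4D novel view synthesis.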