LaRa is a feed-forward 2D Gaussian Splatting model that reconstructs radiance fields from large-baseline views, a single image, or a text prompt. The method unifies local and global reasoning within its transformer layers, improving both quality and convergence speed. Scenes are represented as Gaussian Volumes, and an image encoder is combined with Group Attention Layers for efficient feed-forward reconstruction. Experiments show that LaRa, trained for two days on four GPUs, achieves high-fidelity 360° radiance field reconstruction and remains robust under zero-shot and out-of-domain testing. The model is efficient, requiring only four input images for photorealistic 360° novel view synthesis, and it also enables high-quality mesh reconstruction using off-the-shelf depth-map fusion algorithms.
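
To make the pipeline described above more concrete, below is a minimal PyTorch sketch of the feed-forward "posed images → Gaussian Volume" idea: an image encoder produces feature tokens, and attention blocks (here a generic grouped self-attention, standing in for the paper's Group Attention Layers) refine a learnable volume of Gaussian primitives in a single forward pass. All module names (`ImageEncoder`, `GroupAttentionBlock`, `FeedForwardGaussians`), sizes, and the 13-number Gaussian parameterization are illustrative assumptions, not LaRa's actual implementation.

```python
# A minimal sketch, assuming a PyTorch-style implementation of the abstract's
# architecture. Everything here is illustrative, not LaRa's released code.
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Patchifies input views into feature tokens (stand-in for the paper's encoder)."""

    def __init__(self, dim=256, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, images):                      # images: (B*V, 3, H, W)
        tokens = self.proj(images)                  # (B*V, dim, H/patch, W/patch)
        return tokens.flatten(2).transpose(1, 2)    # (B*V, N, dim)


class GroupAttentionBlock(nn.Module):
    """Cross-attends volume tokens to image tokens, then self-attends within local groups."""

    def __init__(self, dim=256, heads=8, group=64):
        super().__init__()
        self.group = group
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, vol, img):                    # vol: (B, G, dim), img: (B, N, dim)
        vol = vol + self.cross(self.n1(vol), img, img, need_weights=False)[0]
        B, G, D = vol.shape                         # grouped (local) self-attention
        g = self.n2(vol).reshape(B * (G // self.group), self.group, D)
        vol = vol + self.local(g, g, g, need_weights=False)[0].reshape(B, G, D)
        return vol + self.mlp(self.n3(vol))


class FeedForwardGaussians(nn.Module):
    """Predicts per-voxel 2D Gaussian parameters from a handful of input views."""

    def __init__(self, grid=8, dim=256, depth=4):
        super().__init__()
        self.volume = nn.Parameter(torch.randn(grid ** 3, dim) * 0.02)  # learnable Gaussian Volume
        self.encoder = ImageEncoder(dim)
        self.blocks = nn.ModuleList([GroupAttentionBlock(dim) for _ in range(depth)])
        # 3 center offset + 2 scales + 4 rotation (quaternion) + 1 opacity + 3 color = 13
        self.head = nn.Linear(dim, 13)

    def forward(self, images):                      # images: (B, V, 3, H, W)
        B = images.shape[0]
        img = self.encoder(images.flatten(0, 1)).reshape(B, -1, self.volume.shape[-1])
        vol = self.volume.unsqueeze(0).expand(B, -1, -1)
        for blk in self.blocks:
            vol = blk(vol, img)
        return self.head(vol)                       # (B, grid^3, 13) Gaussian parameters


if __name__ == "__main__":
    model = FeedForwardGaussians()
    views = torch.randn(2, 4, 3, 128, 128)          # four input views per scene
    print(model(views).shape)                       # torch.Size([2, 512, 13])
```

In a full system, camera poses would also be injected (for example via per-pixel ray embeddings) and the predicted Gaussians rendered with a 2D Gaussian Splatting rasterizer and supervised photometrically; both steps are omitted here for brevity.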