GALA is a framework that transforms a single-layer 3D human mesh into animatable layered assets, enabling the creation of novel clothed human avatars in arbitrary poses. The approach decomposes the input mesh into separate layers of geometry and texture, addressing the challenge of synthesizing plausible geometry and appearance for occluded regions. By leveraging a pretrained 2D diffusion model as a prior, GALA uses a pose-guided Score Distillation Sampling (SDS) loss to inpaint missing geometry and texture in both posed and canonical spaces. This ensures that the decomposed assets can be composed and reposed into novel identities and poses without artifacts. Experiments demonstrate the effectiveness of GALA in decomposition, canonicalization, and composition tasks, outperforming existing methods. The framework provides a practical pipeline for creating reusable and animatable layered assets, facilitating virtual try-on and avatar customization.
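For context, the pose-guided SDS loss mentioned above builds on the standard score distillation formulation; GALA's specific guidance terms are not detailed in this abstract, so the following is the generic SDS gradient on which such losses are based:

```latex
% Standard SDS gradient (DreamFusion-style), where \theta parameterizes the
% 3D representation, x = g(\theta) is a rendered view, y is the conditioning
% (e.g., a text prompt or pose condition), \epsilon \sim \mathcal{N}(0, I),
% and \hat{\epsilon}_\phi is the pretrained diffusion model's noise prediction:
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} =
  \mathbb{E}_{t,\epsilon}\!\left[
    w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
    \frac{\partial x}{\partial \theta}
  \right]
```

Here $x_t$ is the noised rendering at timestep $t$ and $w(t)$ a timestep-dependent weight; pose guidance enters through the conditioning $y$.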