The paper introduces Dynamic Sparse Learning (DSL), a novel learning paradigm for efficient recommendation systems. DSL aims to reduce both training and inference costs while maintaining comparable recommendation performance. Its key innovation is the dynamic adjustment of the sparsity distribution of model parameters during training via pruning and growth strategies: pruning eliminates weights with negligible impact, while growth reactivates important weights. This keeps the parameter budget consistently small throughout training, achieving end-to-end efficiency. Extensive experiments on various recommendation models and benchmark datasets demonstrate that DSL significantly reduces training and inference costs while maintaining or improving performance, and the authors provide detailed analyses and visualizations to support the effectiveness and rationale of their method.
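To make the prune-and-grow mechanism concrete, the sketch below shows one possible dynamic sparse training step in PyTorch. It is an illustrative assumption, not the paper's exact procedure: the function name `prune_and_grow`, the `update_fraction` parameter, and the specific criteria (magnitude-based pruning of active weights, gradient-magnitude-based regrowth of inactive ones) are hypothetical choices commonly used in dynamic sparse training; the paper's own pruning and growth rules may differ.

```python
import torch

def prune_and_grow(weight: torch.Tensor, mask: torch.Tensor, update_fraction: float = 0.1):
    """One illustrative prune-and-grow step on a single weight tensor.

    A fixed fraction of the currently active weights with the smallest
    magnitudes is pruned, and an equal number of currently inactive weights
    with the largest gradient magnitudes is regrown, so the total parameter
    budget stays constant throughout training.
    """
    grad = weight.grad
    active = mask.bool()
    n_update = int(update_fraction * active.sum().item())
    if grad is None or n_update == 0:
        return mask

    # Prune: deactivate the n_update active weights with the smallest magnitude.
    active_scores = weight.abs().masked_fill(~active, float("inf"))
    prune_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: reactivate the n_update inactive weights with the largest gradient magnitude.
    grow_scores = grad.abs().masked_fill(active, float("-inf"))
    grow_idx = torch.topk(grow_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.clone().flatten()
    new_mask[prune_idx] = 0.0
    new_mask[grow_idx] = 1.0
    new_mask = new_mask.view_as(mask)

    # Zero out pruned weights; newly grown weights start from zero.
    weight.data.mul_(new_mask)
    return new_mask
```

In a training loop, such a step would typically be applied every few hundred iterations after the backward pass, with the mask multiplied into the weights (or gradients) so that only the active subset is ever updated, which is how the parameter budget remains fixed end to end.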