Towards Efficient Replay in Federated Incremental Learning

3 Jun 2024 | Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li*, Wenliang Zhong, Guannan Zhang
This paper proposes Re-Fed, a simple and efficient framework for federated incremental learning (FIL) that addresses catastrophic forgetting under data heterogeneity. In FIL, clients learn from a sequence of incremental tasks while preserving data privacy, but they often lack sufficient storage to retain all previous data, which leads to catastrophic forgetting. Re-Fed lets clients cache important samples for replay, so they retain knowledge from previous tasks while learning new ones.

The key idea is to score the importance of samples and coordinate clients to cache the most important previous samples within limited local storage when a new task arrives. Each client trains a personalized informative model (PIM) on its previous local samples; the PIM incorporates knowledge from both the global and the local model. Sample importance is measured by the gradient norm computed during PIM training. Clients then cache the samples with the highest importance scores and train their local models on both the cached samples and the new task.

Theoretical analysis shows that Re-Fed efficiently discovers important samples for data replay and guarantees convergence. Empirically, Re-Fed achieves competitive performance against state-of-the-art methods, with up to 19.73% improvement in final accuracy across tasks. Re-Fed is designed as a lightweight personalization add-on to standard FIL, inheriting the privacy protection and efficiency properties of traditional FL applications. The framework is evaluated on a range of datasets and scenarios, including class-incremental and domain-incremental learning. Results show that Re-Fed outperforms existing methods in both test accuracy and communication efficiency, is sensitive to only a few hyperparameters, and remains robust over a wide range of settings for the rest.
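The caching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain linear model stands in for the trained PIM, squared loss is an assumed objective, and the names `sample_importance`, `build_replay_cache`, and `cache_size` are hypothetical.

```python
import numpy as np

def sample_importance(X, y, w):
    """Per-sample gradient norm under a linear model with squared loss.

    For loss 0.5 * (x @ w - y)^2, the per-sample gradient is
    (x @ w - y) * x; its norm serves as the importance score,
    mirroring the gradient-norm criterion used during PIM training.
    """
    residual = X @ w - y               # shape (n,)
    grads = residual[:, None] * X      # per-sample gradients, shape (n, d)
    return np.linalg.norm(grads, axis=1)

def build_replay_cache(X, y, w_pim, cache_size):
    """Keep the cache_size samples with the highest importance scores."""
    scores = sample_importance(X, y, w_pim)
    keep = np.argsort(scores)[-cache_size:]   # indices of top-scoring samples
    return X[keep], y[keep]
```

A client would then train its next local model on the cached samples together with the new task's data; in the actual framework the scoring model is the PIM, trained on previous local samples so that it blends global and local knowledge.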
The paper concludes that Re-Fed is a promising solution for addressing catastrophic forgetting in federated incremental learning with data heterogeneity. Future work aims to further explore the dynamic requirements of edge clients in practical FL systems.