Towards Efficient Replay in Federated Incremental Learning

3 Jun 2024 | Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li, Wenliang Zhong, Guannan Zhang
The paper "Towards Efficient Replay in Federated Incremental Learning" addresses catastrophic forgetting in Federated Incremental Learning (FIL), where clients may lack sufficient storage to retain the full data of previous tasks. The authors propose Re-Fed, a simple and generic framework in which clients cache important samples for replay, thereby alleviating catastrophic forgetting. Re-Fed coordinates clients to cache samples according to both their global and local importance, using a Personalized Informative Model (PIM) that incorporates local and global knowledge. The PIM is updated on the previous local samples, and the gradient norms of those samples under the PIM are used to compute importance scores. Each client then caches the samples with the highest scores and trains its local model on both the cached samples and the samples of the new task. The paper provides theoretical analysis showing that Re-Fed can efficiently discover important samples for replay, and demonstrates its effectiveness empirically across various datasets and task types, where Re-Fed outperforms state-of-the-art methods by up to 19.73% in final accuracy. The main contributions are the introduction of Re-Fed, a novel framework for mitigating catastrophic forgetting in FIL, and extensive experimental validation of its performance.
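To make the caching step concrete, here is a minimal sketch of the gradient-norm importance scoring and replay-buffer construction described above. This is a hypothetical illustration, not the authors' implementation: a simple logistic-regression model stands in for the PIM, and the function names (`sample_importance`, `cache_important_samples`, `build_replay_batch`) are invented for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_importance(weights, X, y):
    """Per-sample gradient norm of the logistic loss w.r.t. `weights`.

    Stand-in for the paper's PIM-based scoring: the gradient norm of
    each previous local sample serves as its importance score.
    """
    preds = sigmoid(X @ weights)              # (n,) predicted probabilities
    residual = preds - y                      # (n,) per-sample loss gradient w.r.t. logits
    per_sample_grads = residual[:, None] * X  # (n, d) per-sample weight gradients
    return np.linalg.norm(per_sample_grads, axis=1)

def cache_important_samples(weights, X_old, y_old, cache_size):
    """Keep the `cache_size` previous samples with the highest scores."""
    scores = sample_importance(weights, X_old, y_old)
    keep = np.argsort(scores)[::-1][:cache_size]  # indices of top scores
    return X_old[keep], y_old[keep]

def build_replay_batch(X_cached, y_cached, X_new, y_new):
    """Local training data for the new task: cached replay + new samples."""
    X = np.concatenate([X_cached, X_new], axis=0)
    y = np.concatenate([y_cached, y_new], axis=0)
    return X, y
```

Under this sketch, each client would score its previous task's samples against the (assumed) PIM weights, keep only the top-scoring ones within its storage budget, and mix them with the new task's data for local training.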