2024-01-29 | Zihan Chen, Howard H. Yang, Tony Q.S. Quek, and Kai Fong Ernest Chong
Spectral Co-Distillation for Personalized Federated Learning proposes a framework that couples spectral distillation with co-distillation to improve personalized federated learning (PFL). The key idea is to use the Fourier spectrum of model parameters to capture the (dis-)similarity between the generic (global) model and each client's personalized model, enabling more effective knowledge transfer in both directions: spectral distillation guides personalized model training, while co-distillation feeds personalized knowledge back into generic model training. The method also introduces a wait-free local training protocol that lets clients use the idle time during global communication rounds for additional local training, reducing communication overhead and overall wall-clock runtime. Extensive experiments on multiple datasets show that the framework handles heterogeneous data distributions, supports dynamic client participation, and outperforms conventional FL and existing PFL baselines on both generic and personalized models in terms of accuracy and communication efficiency, demonstrating the effectiveness of spectral information for knowledge distillation.
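The paper's exact spectral loss is not reproduced here, but the core mechanism can be illustrated. Below is a minimal sketch assuming the "spectrum" is the magnitude of the FFT of the flattened parameter vector and the (dis-)similarity is measured with an MSE penalty between the two spectra; the names `spectral_distill_loss` and the weight `lam` are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn

def flat_params(model: nn.Module) -> torch.Tensor:
    """Flatten all parameters into one vector (keeps the autograd graph)."""
    return torch.cat([p.reshape(-1) for p in model.parameters()])

def magnitude_spectrum(vec: torch.Tensor) -> torch.Tensor:
    """Magnitude of the real FFT of a parameter vector."""
    return torch.abs(torch.fft.rfft(vec))

def spectral_distill_loss(personal: nn.Module, generic: nn.Module) -> torch.Tensor:
    """Illustrative spectral distillation term: MSE between the Fourier
    magnitude spectra of the personalized and (frozen) generic models."""
    s_personal = magnitude_spectrum(flat_params(personal))
    s_generic = magnitude_spectrum(flat_params(generic)).detach()  # no grad to generic
    return torch.mean((s_personal - s_generic) ** 2)

# Usage: add the spectral term to the ordinary local objective.
personal, generic = nn.Linear(10, 2), nn.Linear(10, 2)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
lam = 0.1  # hypothetical weight of the spectral regularizer
loss = nn.functional.cross_entropy(personal(x), y) \
       + lam * spectral_distill_loss(personal, generic)
loss.backward()
```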
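The wait-free protocol can likewise be sketched. The paper's exact scheduling details are not given here; this sketch assumes the overlap is realized by taking extra local training steps while the global-model download is still in flight, with `wait_free_round`, `train_one_step`, and `fetch_global_model` as hypothetical placeholders.

```python
import threading

def wait_free_round(client_model, train_one_step, fetch_global_model):
    """Illustrative wait-free round: keep taking local steps while the
    global-model download runs in a background thread, then hand back the
    downloaded model once it arrives (a sketch, not the paper's protocol)."""
    result = {}

    def download():
        result["global"] = fetch_global_model()  # blocking network call

    t = threading.Thread(target=download)
    t.start()
    while t.is_alive():
        # Idle time during global communication is spent on extra training.
        train_one_step(client_model)
    t.join()
    return result["global"]
```

The design point being illustrated: instead of blocking on synchronization as in conventional FL rounds, the client's compute stays busy during communication, which is where the reported runtime savings come from.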