2024 | Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han
This paper proposes Co-Boosting, a novel one-shot federated learning (OFL) framework in which the synthesized data and the ensemble of client models improve each other. In OFL, clients upload their locally trained models in a single communication round; the server then distills a global model from the ensemble of client models, typically using synthetic data generated from those same client models. The server model's quality is therefore bounded by both the quality of the synthesized data and the quality of the ensemble. Existing methods try to improve one or the other in isolation and overlook their interdependence.
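The distillation setup described above can be sketched as follows. This is a minimal, illustrative sketch: the weighted-ensemble form and the KL-divergence objective are common choices in data-free knowledge distillation, assumed here rather than taken from the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_logits(client_logits, weights):
    # client_logits: (num_clients, batch, classes); weights: (num_clients,)
    # A weighted sum of per-client logits acts as the "teacher" ensemble.
    w = np.asarray(weights)[:, None, None]
    return (w * client_logits).sum(axis=0)

def kd_loss(student_logits, teacher_logits):
    # KL(teacher || student), averaged over the batch: the server ("student")
    # is trained to match the ensemble's predictive distribution.
    p = softmax(teacher_logits)
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits) + 1e-12)
    return float((p * (log_p - log_q)).sum(axis=-1).mean())
```

On synthetic inputs, the server's parameters would be updated to minimize `kd_loss` against the frozen ensemble; the loss is zero exactly when the two distributions coincide.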
Co-Boosting makes the synthesized data and the ensemble enhance each other iteratively. In each round, the current ensemble is used to adversarially generate hard, high-quality samples; these samples are then used to re-estimate the ensembling weights of the client models, yielding a stronger ensemble. The server model is updated by distilling knowledge from both the harder data and the refined ensemble, so improvements in one component feed directly into the other, and the server model improves round by round.
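One round of this mutual enhancement can be sketched as below. This is an illustrative toy, not the paper's algorithm: clients are assumed to be frozen linear classifiers, "hardening" is plain gradient ascent on the ensemble's cross-entropy, and the reweighting step uses a simple loss-based heuristic standing in for the paper's actual weight-optimization procedure.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Per-sample cross-entropy for integer class labels.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12)

def coboost_round(client_ws, x, y, ens_w, step=0.5, ascent_steps=5):
    """One illustrative Co-Boosting round (assumed linear clients).

    client_ws : list of (classes, features) matrices, the frozen client models
    x, y      : current synthetic samples and their pseudo-labels
    ens_w     : current per-client ensembling weights
    """
    # 1) Harden the samples: gradient *ascent* on the ensemble's
    #    cross-entropy pushes x toward regions the ensemble finds difficult.
    for _ in range(ascent_steps):
        logits = sum(w * (x @ W.T) for w, W in zip(ens_w, client_ws))
        p = softmax(logits)
        onehot = np.eye(logits.shape[1])[y]
        grad_x = (p - onehot) @ sum(w * W for w, W in zip(ens_w, client_ws))
        x = x + step * grad_x
    # 2) Reweight the clients (heuristic assumption): clients with lower
    #    loss on the hard samples receive larger ensembling weights.
    losses = np.array([cross_entropy(x @ W.T, y).mean() for W in client_ws])
    ens_w = softmax(-losses[None])[0]
    return x, ens_w
```

A server model would then be distilled from the hardened `x` and the reweighted ensemble, and the round repeats with the improved components.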
Extensive experiments on multiple benchmark datasets demonstrate that Co-Boosting significantly outperforms existing baselines. It requires no changes to clients' local training, no additional data or model transmission, and it accommodates heterogeneous client models. The method is particularly effective in contemporary model-market scenarios where clients provide pre-trained models. By jointly improving the quality of the synthesized data and the ensemble, Co-Boosting achieves state-of-the-art performance, yielding a more accurate and robust server model. The framework is practical, efficient, and adaptable to various settings, making it a promising solution for one-shot federated learning.