Federated Learning with Bilateral Curation for Partially Class-Disjoint Data

29 May 2024 | Ziqing Fan¹·², Ruipeng Zhang¹·², Jiangchao Yao¹·², Bo Han¹, Ya Zhang¹·², Yanfeng Wang¹·²
The paper addresses the challenge of partially class-disjoint data (PCDD) in federated learning, where each client contributes only a subset of the classes rather than all of them. PCDD causes angle collapse for the missing classes and wasted feature space for the present classes, degrading both the global and local objectives. To tackle this, the authors propose FedGELA, a novel approach that globally fixes the classifier as a simplex Equiangular Tight Frame (ETF) and locally adapts it to each client's personal distribution. The ETF ensures fair discrimination among all classes globally, while local adaptation reclaims the wasted space for the classes a client actually holds. The method comes with convergence guarantees for both the local and global objectives and is validated experimentally on various datasets, showing significant improvements over state-of-the-art methods in both personalized and generic tasks: FedGELA achieves an average improvement of 3.9% over FedAvg and 1.5% over the best baselines. The source code is available at <https://github.com/MediaBrain-SJTU/FedGELA>.
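The key building block, a simplex ETF, is a set of K unit-norm class vectors with identical pairwise cosine similarity −1/(K−1), i.e. maximally and equally separated. A common construction is M = √(K/(K−1)) · U (I_K − (1/K)𝟙𝟙ᵀ), where U has orthonormal columns. The sketch below (a generic construction, not the authors' released code; function and variable names are our own) builds such a fixed classifier with NumPy:

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Build a simplex ETF classifier: a (feat_dim, num_classes) matrix whose
    columns are unit vectors with pairwise cosine similarity -1/(num_classes-1)."""
    # The K simplex vertices span a (K-1)-dim subspace, so we need feat_dim >= K-1.
    assert feat_dim >= num_classes - 1
    K = num_classes
    rng = np.random.default_rng(seed)
    # Random orthonormal basis U (feat_dim x K) via reduced QR decomposition.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, K)))
    # Center the identity columns and rescale so each column has unit norm.
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M

W = simplex_etf(num_classes=10, feat_dim=64)
G = W.T @ W  # Gram matrix: 1 on the diagonal, -1/9 off-diagonal
```

Because the Gram matrix is fixed regardless of which classes a client observes, freezing this matrix as the global classifier avoids the angle collapse that PCDD would otherwise induce in a learned classifier.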