Model-Contrastive Federated Learning


30 Mar 2021 | Qinbin Li, Bingsheng He, Dawn Song
The paper introduces MOON (Model-Contrastive Federated Learning), a novel federated learning framework that addresses the challenge of non-IID (not independent and identically distributed) data distributions across parties. MOON leverages contrastive learning at the model level to correct the local training of individual parties, aiming to align the representations learned by different models. Unlike traditional contrastive learning, which compares representations of different images, MOON compares representations learned by different models. The key idea is to maximize the agreement between the representation learned by the current local model and that of the global model, thereby reducing the drift in local updates and improving overall performance. Extensive experiments on image classification datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) demonstrate that MOON significantly outperforms state-of-the-art federated learning algorithms, achieving at least 2% higher accuracy in most cases. MOON is also shown to be communication-efficient and scalable, requiring fewer communication rounds and retaining its accuracy advantage as the number of parties grows. The effectiveness of MOON is further validated through experiments on varying degrees of data heterogeneity and comparisons of alternative loss functions.
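The model-level contrastive idea described above can be written as a contrastive loss over three representations of the same input: the current local model's output (anchor), the global model's output (positive), and the previous local model's output (negative). The sketch below is a minimal PyTorch rendering under those assumptions; the function name, argument names, and temperature default are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_prev, temperature=0.5):
    """Illustrative model-contrastive loss.

    z_local : representations from the current local model (batch, dim)
    z_global: representations of the same inputs from the global model (positive pair)
    z_prev  : representations from the previous round's local model (negative pair)
    """
    # Cosine similarity between the anchor and the positive / negative representations.
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / temperature
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / temperature

    # -log( exp(pos) / (exp(pos) + exp(neg)) ), computed via a standard
    # cross-entropy over two "logits" with the positive pair at index 0.
    logits = torch.stack([pos, neg], dim=1)                       # (batch, 2)
    labels = torch.zeros(z_local.size(0), dtype=torch.long,
                         device=z_local.device)                   # positive index
    return F.cross_entropy(logits, labels)
```

In MOON, each party's local objective combines a term of this form with the usual supervised cross-entropy loss on its own labels, weighted by a hyperparameter (denoted μ in the paper), so that local training both fits the local data and keeps its representations close to the global model's.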