Ditto: Fair and Robust Federated Learning Through Personalization


15 Jun 2021 | Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith
Ditto is a simple, scalable framework for personalized federated learning that provides inherent fairness and robustness benefits. The paper observes that fairness and robustness are competing constraints in statistically heterogeneous federated learning (FL) systems: robust aggregation tends to filter out rare but benign devices, hurting fairness, while fairness techniques that upweight poorly performing devices can amplify the influence of corrupted ones, hurting robustness. Ditto addresses this tension with a multi-task learning formulation in which each device learns a personalized model alongside the shared global model, adapting to data heterogeneity while retaining the privacy and communication efficiency of traditional FL. The approach applies to both convex and non-convex objectives, and on linear problems Ditto is shown theoretically to achieve fairness and robustness simultaneously.

Empirically, across a suite of federated benchmarks, Ditto outperforms recent personalization methods as well as state-of-the-art fair or robust baselines in terms of test accuracy, robustness to data and model poisoning attacks, and fairness, measured as reduced variance of test accuracy across devices. Ditto can also be combined with robust baselines to further improve performance, which makes it useful in practical applications where accuracy, fairness, and robustness must be satisfied simultaneously.
Overall, Ditto is a lightweight personalization add-on that maintains the privacy and communication efficiency of traditional FL while providing a simple and effective answer to the competing constraints of accuracy, fairness, and robustness. The authors suggest future work on extending Ditto to other attack models and on understanding the fairness and robustness properties of other personalized methods.
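To make the formulation concrete: for each device k, the paper's Ditto objective learns a personalized model v_k by minimizing h_k(v_k; w*) = F_k(v_k) + (lambda/2) * ||v_k - w*||^2, where F_k is the device's local loss, w* is the standard global FL model, and lambda controls how closely personalized models track the global one (lambda = 0 recovers purely local training; large lambda recovers the global model). Below is a minimal sketch of the resulting per-client updates on a toy linear-regression problem; the function and variable names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of Ditto's per-client updates on toy linear-regression clients.
# Names (local_global_step, ditto_personal_step, lam, ...) are illustrative,
# not from the paper's code release; lam plays the role of Ditto's lambda.
import numpy as np

rng = np.random.default_rng(0)

def grad_mse(w, X, y):
    """Gradient of the mean-squared-error loss F_k for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def local_global_step(w_global, X, y, lr=0.1):
    """FedAvg-style local step on the shared objective F_k (sent to the server)."""
    return w_global - lr * grad_mse(w_global, X, y)

def ditto_personal_step(v_k, w_global, X, y, lam=0.5, lr=0.1):
    """Personalized step: gradient of F_k(v_k) + (lam/2) * ||v_k - w_global||^2."""
    return v_k - lr * (grad_mse(v_k, X, y) + lam * (v_k - w_global))

# Toy heterogeneous clients: each has its own ground-truth weights.
clients = []
for _ in range(5):
    w_true = rng.normal(size=3)
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

d = 3
w_global = np.zeros(d)                       # shared model, as in FedAvg
v_personal = [np.zeros(d) for _ in clients]  # one personalized model per client

for round_ in range(100):
    local_models = []
    for k, (X, y) in enumerate(clients):
        # 1) local update of the shared model (communicated to the server)
        local_models.append(local_global_step(w_global, X, y))
        # 2) local update of the personalized model (kept on-device)
        v_personal[k] = ditto_personal_step(v_personal[k], w_global, X, y)
    # Server aggregates the shared-model updates exactly as in FedAvg.
    w_global = np.mean(local_models, axis=0)
```

In the full algorithm, only the shared-model updates are aggregated by the server, while each v_k never leaves the device; this is why Ditto inherits the communication and privacy profile of standard FL.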