Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity


26 May 2024 | Yuhang Chen*, Wenke Huang*, Mang Ye†
The paper "Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity" addresses performance fairness in federated learning (FL) under domain skew: the scenario where client data is sampled from multiple domains, leading to significant data disparities and performance differences among clients. The authors identify two primary fairness issues:

1. **Parameter Update Conflict**: This issue arises from varying parameter importance and inconsistent update directions among clients. Important parameters are potentially overwhelmed by unimportant ones, leading to poor performance for lower-performing clients.
2. **Model Aggregation Bias**: Existing FL approaches often introduce unfair weight allocation and neglect domain diversity, resulting in biased model convergence and distinct performance across domains.

To tackle these issues, the authors propose a novel framework called Federated Parameter-Harmonized and Aggregation-Equalized Learning (FedHEAL). FedHEAL leverages the discovered characteristic of parameter update consistency (PUC) to selectively discard unimportant parameter updates, ensuring fair generalization performance. Additionally, it introduces a fair aggregation objective that prevents the global model from drifting toward certain domains, maintaining unbiased model alignment. The proposed method is generic and can be integrated with existing FL methods to enhance fairness. Comprehensive experiments on the Digits and Office-Caltech datasets demonstrate the effectiveness of FedHEAL in achieving high fairness and performance across different domains.
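To make the two components more concrete, below is a minimal, hedged sketch of the general idea: masking a client's parameter updates whose direction disagrees with the cross-client consensus (a simplified stand-in for PUC-based selection), followed by uniform per-client aggregation instead of data-size-proportional weighting. The function names, the consistency threshold `tau`, and the exact masking rule are illustrative assumptions, not the paper's actual algorithm.

```python
from typing import List


def sign(x: float) -> int:
    """Return -1, 0, or 1 depending on the sign of x."""
    return (x > 0) - (x < 0)


def mask_inconsistent_updates(
    client_deltas: List[List[float]], tau: float = 0.5
) -> List[List[float]]:
    """Illustrative PUC-style masking (assumed simplification).

    For each parameter, find the majority update direction across
    clients. A client's update is kept only if it matches that
    direction and the cross-client agreement is at least `tau`;
    otherwise it is zeroed out, so conflicting, unimportant updates
    cannot overwhelm consistent ones.
    """
    n_clients = len(client_deltas)
    n_params = len(client_deltas[0])
    masked = [list(d) for d in client_deltas]
    for p in range(n_params):
        signs = [sign(d[p]) for d in client_deltas]
        majority = sign(sum(signs))
        agreement = sum(s == majority for s in signs) / n_clients
        for c in range(n_clients):
            if signs[c] != majority or agreement < tau:
                masked[c][p] = 0.0
    return masked


def fair_aggregate(client_deltas: List[List[float]]) -> List[float]:
    """Aggregate updates with uniform per-client weights.

    Weighting every client equally (rather than by local data size)
    is one simple way to keep a single large domain from dominating
    the global model; the paper's actual fair-aggregation objective
    is more elaborate.
    """
    n = len(client_deltas)
    n_params = len(client_deltas[0])
    return [sum(d[p] for d in client_deltas) / n for p in range(n_params)]


# Example: three clients, two parameters. All clients agree on the
# direction of parameter 0; client 0 disagrees on parameter 1, so its
# conflicting update is dropped before aggregation.
deltas = [[1.0, -1.0], [2.0, 1.0], [0.5, 1.0]]
masked = mask_inconsistent_updates(deltas, tau=0.5)
global_update = fair_aggregate(masked)
```

In this toy run, client 0's update for parameter 1 is zeroed (its sign conflicts with the majority), while its parameter-0 update survives, so the aggregated direction reflects the consistent majority rather than the lone conflicting client.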
The method is shown to improve average accuracy and reduce the standard deviation of per-client performance, making it a promising solution for fair federated learning under domain skew.