Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity

26 May 2024 | Yuhang Chen, Wenke Huang, Mang Ye
This paper introduces FedHEAL, a novel framework for achieving performance fairness in federated learning (FL) under domain skew. FL enables collaborative model training while preserving data privacy, but domain skew, where client data come from different domains with varying feature distributions, introduces two key fairness challenges: parameter update conflicts and model aggregation bias. Parameter update conflicts arise from inconsistent parameter importance and update directions across clients, so that updates to unimportant parameters can overwhelm important ones. Model aggregation bias occurs when existing FL methods allocate aggregation weights unfairly, neglecting domain diversity and leading to a biased convergence objective.

FedHEAL addresses these issues by leveraging the parameter update consistency (PUC) observed in FL. It selectively discards unimportant parameter updates so that they do not overwhelm important ones, yielding fairer generalization performance. In addition, it introduces a fair aggregation objective that prevents the global model from being biased toward particular domains, keeping it aligned with an unbiased model. The method is generic and can be integrated with existing FL approaches to improve fairness.

Experiments on the Digits and Office-Caltech datasets show that FedHEAL achieves both high fairness and strong performance: it outperforms existing methods in average accuracy and in the standard deviation of accuracy across domains, indicating improved fairness. The method is computationally efficient and applies to scenarios where clients share the same network architecture; it may not perform well when clients use different architectures and parameter update consistency cannot be assumed. Overall, FedHEAL offers a new approach to performance fairness in FL under domain skew and suggests fresh research directions for the community.
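To make the first idea concrete, below is a minimal sketch of consistency-based update masking. It is not the paper's exact formulation: the consistency statistic (fraction of past rounds in which a parameter's update kept its dominant sign), the threshold `tau`, and the names `sign_history` and `mask_unimportant_updates` are illustrative assumptions that only mimic the idea of discarding inconsistent, unimportant parameter updates before aggregation.

```python
# Hedged sketch: per-parameter update masking driven by update-direction
# consistency. The statistic, threshold, and names are assumptions, not
# the paper's exact PUC definition.
import numpy as np

def update_consistency(sign_history: np.ndarray) -> np.ndarray:
    """Fraction of past rounds in which each parameter's update kept
    its most frequent (dominant) sign.
    sign_history: shape (rounds, num_params), entries in {-1, 0, +1}."""
    pos = (sign_history > 0).mean(axis=0)
    neg = (sign_history < 0).mean(axis=0)
    return np.maximum(pos, neg)

def mask_unimportant_updates(local_delta: np.ndarray,
                             sign_history: np.ndarray,
                             tau: float = 0.6) -> np.ndarray:
    """Zero out updates for parameters whose historical update direction
    is inconsistent (consistency below tau), so they do not overwhelm
    consistently important parameters during aggregation."""
    keep = update_consistency(sign_history) >= tau
    return np.where(keep, local_delta, 0.0)

# Toy usage: 3 past rounds, 5 parameters.
history = np.sign(np.array([[ 0.2, -0.1,  0.3, -0.4,  0.0],
                            [ 0.1,  0.2,  0.2, -0.3,  0.1],
                            [ 0.3, -0.2,  0.1, -0.5, -0.1]]))
delta = np.array([0.05, 0.02, 0.04, -0.06, 0.01])
print(mask_unimportant_updates(delta, history, tau=0.6))
# Only the last parameter, whose direction keeps flipping, is discarded.
```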
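For the second idea, the sketch below illustrates one way a domain-fair aggregation step could look: instead of weighting clients purely by sample count, the server nudges aggregation weights toward clients whose models sit far from the current global model, so no single domain dominates the convergence point. The distance-based re-weighting rule and the momentum coefficient `beta` are assumptions chosen for illustration, not FedHEAL's published objective.

```python
# Hedged sketch of a domain-fair aggregation step. The distance-based
# re-weighting and the momentum coefficient beta are illustrative
# assumptions standing in for the paper's fair aggregation objective.
import numpy as np

def fair_aggregate(global_w: np.ndarray,
                   client_ws: list,
                   weights: np.ndarray,
                   beta: float = 0.5):
    """Aggregate client models while shifting weights toward clients whose
    models are far from the global model, to counter domain bias."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    if dists.sum() > 0:
        target = dists / dists.sum()                 # distant domains get more weight
        weights = (1 - beta) * weights + beta * target
        weights = weights / weights.sum()
    new_global = sum(wi * w for wi, w in zip(weights, client_ws))
    return new_global, weights

# Toy usage: 3 clients, equal initial weights; the third client's domain
# pulls its model much farther from the global model.
g = np.zeros(4)
clients = [np.full(4, 0.1), np.full(4, 0.1), np.full(4, 1.0)]
w = np.ones(3) / 3
new_g, new_w = fair_aggregate(g, clients, w)
print(new_w)   # weight shifts toward the distant third client
print(new_g)
```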