This paper introduces FedAF, a novel aggregation-free federated learning (FL) algorithm designed to address data heterogeneity in distributed learning scenarios. Traditional FL methods rely on an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server, leading to client drift and suboptimal performance in data-heterogeneous settings. FedAF, in contrast, employs a collaborative data condensation approach, where clients learn condensed data representations and share them with the server. The server then trains the global model using these condensed data and soft labels, avoiding client drift and improving model performance.
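As a rough illustration of this workflow, the Python sketch below shows what one aggregation-free communication round might look like. The function names and the `condense_fn` / `train_fn` callables are hypothetical placeholders introduced here for clarity, not the authors' implementation.

```python
import numpy as np

def fedaf_round(client_datasets, condense_fn, train_fn, global_model):
    """One illustrative aggregation-free round (sketch, not the paper's code).

    client_datasets : list of (features, labels) arrays held by each client
    condense_fn     : maps a local dataset and the current global model to a
                      small synthetic dataset plus per-class soft labels
    train_fn        : updates the global model on the pooled condensed data
    """
    pooled_x, pooled_y, pooled_soft = [], [], []
    for x, y in client_datasets:
        # Clients upload condensed data and soft labels instead of model
        # weights, so the server never averages client models
        # (hence "aggregation-free").
        syn_x, syn_y, soft = condense_fn(x, y, global_model)
        pooled_x.append(syn_x)
        pooled_y.append(syn_y)
        pooled_soft.append(soft)
    # The server trains the global model directly on the pooled condensed data.
    return train_fn(global_model,
                    np.concatenate(pooled_x),
                    np.concatenate(pooled_y),
                    np.concatenate(pooled_soft))
```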
FedAF enhances the quality of condensed data by aligning local knowledge distributions with global knowledge through a Sliced Wasserstein Distance (SWD)-based regularization. It also incorporates a local-global knowledge matching scheme, enabling the server to utilize soft labels extracted from client data, thereby refining the training process and improving convergence. Extensive experiments on benchmark datasets such as CIFAR10, CIFAR100, and Fashion-MNIST demonstrate that FedAF outperforms state-of-the-art FL algorithms in terms of accuracy and convergence speed, particularly under strong data heterogeneity. On CIFAR10, FedAF achieves up to 25.44% improvement in accuracy and 80% faster convergence compared to existing methods. Additionally, FedAF shows superior performance on feature-skew data heterogeneity, such as in the DomainNet dataset, where it achieves higher accuracy and faster convergence than other baselines.
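For concreteness, a minimal NumPy sketch of a sliced Wasserstein distance that could serve as such an alignment regularizer is given below. It uses the standard Monte-Carlo estimator with random 1-D projections and assumes both feature sets contain the same number of samples; it is a generic SWD implementation, not the paper's exact formulation.

```python
import numpy as np

def sliced_wasserstein_distance(local_feats, global_feats,
                                n_projections=128, seed=0):
    """Monte-Carlo estimate of the squared sliced Wasserstein-2 distance
    between two sets of d-dimensional feature vectors (one per row).
    Assumes both sets have the same number of samples."""
    rng = np.random.default_rng(seed)
    d = local_feats.shape[1]

    # Random unit vectors defining the 1-D slices.
    directions = rng.normal(size=(n_projections, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    # Project both sets onto every slice: shape (n_projections, n_samples).
    proj_local = directions @ local_feats.T
    proj_global = directions @ global_feats.T

    # In 1-D, the Wasserstein distance between equal-size empirical
    # distributions is the distance between their sorted samples.
    proj_local.sort(axis=1)
    proj_global.sort(axis=1)
    return np.mean((proj_local - proj_global) ** 2)
```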
The key contributions of this work include: (1) the proposal of FedAF, an aggregation-free FL algorithm that avoids client drift and improves global model performance; (2) the introduction of a collaborative data condensation scheme that enhances the quality of condensed data by leveraging global knowledge; and (3) the development of a local-global knowledge matching scheme that enables the server to utilize soft labels for improved global model training. These innovations collectively enhance the effectiveness of FL in handling data heterogeneity, offering a more robust and efficient solution for distributed learning scenarios.