Differentially Private Federated Learning: A Client Level Perspective
1 Mar 2018 | Robin C. Geyer, Tassilo Klein, Moin Nabi
This paper presents a federated learning algorithm that preserves differential privacy at the client level. Federated learning lets multiple clients collaboratively train a model without sharing their raw data; however, the model updates that clients do share remain vulnerable to differential attacks, which can reveal information about an individual client's data. The authors therefore incorporate a differential-privacy-preserving mechanism into the training process to hide each client's contribution.
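To make the setting concrete, the following is a minimal federated-averaging sketch in Python/NumPy. It is an illustration, not the authors' implementation: the linear model, learning rate, and the names client_update and federated_round are assumptions chosen for brevity. The point it shows is that only weight updates, never raw data, reach the server.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1, epochs=1):
    """Hypothetical local training step: a client refines the global weights
    on its own data and returns only the weight difference."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of a simple squared loss
        w = w - lr * grad
    return w - global_weights              # the update leaves the client; the data does not

def federated_round(global_weights, clients):
    """Plain federated averaging: the server averages the clients' updates."""
    updates = [client_update(global_weights, data) for data in clients]
    return global_weights + np.mean(updates, axis=0)

# Toy usage: 5 clients, each holding 20 points of a noisy linear problem.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print(w)  # approaches true_w without any client ever sharing its raw data
```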
The algorithm randomly samples a subset of clients in each communication round, clips each sampled client's update, and applies a Gaussian mechanism, i.e. adds calibrated noise, to the aggregated update. This hides whether any particular client participated in a given round of training. The mechanism is also adjusted dynamically during training to balance privacy loss against model performance.
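A minimal sketch of one such communication round is below, under the assumption that a clipping bound S and a noise multiplier sigma have already been fixed; the accounting of the accumulated privacy loss over rounds (the paper tracks it with a moments accountant) is omitted here. The function signature and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def dp_federated_round(global_weights, clients, local_train, m, S, sigma, rng):
    """One client-level differentially private round (sketch).

    clients     : list of per-client datasets
    local_train : callable(global_weights, client_data) -> weight update
    m           : number of clients sampled this round
    S           : L2 clipping bound for a single client's update
    sigma       : noise multiplier of the Gaussian mechanism
    """
    # 1. Randomly sample m clients for this round.
    sampled = rng.choice(len(clients), size=m, replace=False)
    clipped = []
    for idx in sampled:
        update = local_train(global_weights, clients[idx])
        # 2. Clip each update so its L2 norm is at most S (bounds the sensitivity).
        clipped.append(update / max(1.0, np.linalg.norm(update) / S))
    # 3. Gaussian mechanism: noise calibrated to S is added to the summed updates.
    noise = rng.normal(0.0, sigma * S, size=np.shape(global_weights))
    return global_weights + (np.sum(clipped, axis=0) + noise) / m

# Example call, reusing client_update from the earlier sketch:
# w = dp_federated_round(w, clients, client_update, m=3, S=1.0, sigma=1.0,
#                        rng=np.random.default_rng(1))
```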
The authors evaluate the approach in a federated learning setup on the MNIST dataset, varying the number of clients (100, 1,000, and 10,000). They find that the method can maintain high model accuracy while guaranteeing client-level differential privacy, and that the trade-off improves as the client population grows: with many participating clients, a good model can be trained at a modest privacy cost.
The paper also examines the relationship between the number of clients, the privacy budget, and model performance. As the number of clients increases, performance approaches that of a non-differentially-private model trained on the same task. The authors further note that knowledge of the distribution of data and updates across clients helps in spending the privacy budget effectively.
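A back-of-the-envelope illustration of why a larger sampled group helps (illustrative numbers, not figures from the paper): with the clipping bound S and noise multiplier sigma held fixed, the noise remaining in the averaged update has a per-coordinate standard deviation of roughly sigma * S / m, so it shrinks as more clients m are sampled per round.

```python
# Illustrative only: how the residual noise in the averaged update scales with m.
S, sigma = 1.0, 1.0          # clipping bound and noise multiplier, held fixed
for m in (30, 100, 300):     # clients sampled per communication round
    print(f"m = {m:3d}  ->  noise std per coordinate ~ {sigma * S / m:.4f}")
```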
In summary, the proposed randomized mechanism, random client sampling combined with clipped and Gaussian-noised updates, conceals whether any particular client took part in training. Evaluated on a federated MNIST setup, it maintains high model accuracy while providing client-level differential privacy.