Privacy preservation for federated learning in health care

July 12, 2024 | Sarthak Pati, Sourav Kumar, Amokh Varma, Brandon Edwards, Charles Lu, Liangqiong Qu, Justin J. Wang, Anantharaman Lakshminarayanan, Shih-han Wang, Micah J. Sheller, Ken Chang, Praveer Singh, Daniel L. Rubin, Jayashree Kalpathy-Cramer, and Spyridon Bakas
This review discusses privacy preservation in federated learning (FL) for healthcare. FL allows multiple institutions to collaboratively train AI models without sharing raw patient data, which in itself offers a degree of privacy protection. FL nevertheless carries its own privacy risks, such as information leakage through the model updates exchanged during training and potential data breaches, so dedicated privacy-preserving techniques remain necessary for secure and private collaborative AI in healthcare.

The review outlines the key privacy threats in FL, including data exfiltration, model exfiltration, information extraction, and poisoning attacks, and discusses mitigation strategies such as secure multi-party computation, homomorphic encryption, confidential computing, and differential privacy. These techniques protect sensitive data and prevent unauthorized access or misuse, which is especially important in healthcare, where data is both highly sensitive and heavily regulated. The review provides a comprehensive overview of the current state of privacy-preserving FL, including open challenges and candidate solutions, and is intended as a guide for healthcare researchers and practitioners seeking to implement FL while maintaining data privacy and security.
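To make the collaborative-training pattern concrete, here is a minimal sketch of federated averaging (FedAvg) on a toy linear model, with synthetic data standing in for three hypothetical institutions. It illustrates the general idea that only model weights, never raw records, leave each site; it is not the implementation evaluated in the review.

```python
# Minimal FedAvg sketch (illustrative assumptions: linear model,
# synthetic data, three clients). Raw data never leaves a client.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one institution's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: weight each update by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Synthetic data for three hypothetical institutions.
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("learned weights:", global_w)  # approaches [2, -1]
```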
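One mitigation strategy the review names is secure multi-party computation. A common instance in FL is secure aggregation via additive masking: clients add pairwise random masks that cancel in the server's sum, so the server learns the aggregate update but no individual one. The toy sketch below draws the masks from a single shared RNG purely for brevity; a real protocol would derive them from pairwise key agreement between clients.

```python
# Toy additive-masking sketch of secure aggregation (illustration only).
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=3) for _ in range(3)]  # three clients' updates

n = len(updates)
# Pairwise masks: client i adds masks[i][j] and subtracts masks[j][i],
# so every mask appears once with + and once with - across all clients.
masks = [[rng.normal(size=3) for _ in range(n)] for _ in range(n)]

masked = []
for i, u in enumerate(updates):
    m = u.copy()
    for j in range(n):
        if j != i:
            m += masks[i][j] - masks[j][i]
    masked.append(m)

# The server only ever sees masked updates; the masks cancel in the sum.
assert np.allclose(sum(masked), sum(updates))
print("aggregate:", sum(masked))
```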
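Differential privacy, another mitigation the review discusses, bounds what any single patient record can reveal through a shared model update. A minimal sketch of the Gaussian mechanism follows: clip the update's L2 norm, then add noise calibrated to that bound. The `clip_norm` and `noise_multiplier` values here are illustrative assumptions, not values from the paper.

```python
# Minimal Gaussian-mechanism sketch for privatizing a model update.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update's L2 norm to clip_norm, then add Gaussian noise
    scaled to that bound, limiting any single record's influence."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

noisy = privatize_update(np.array([0.5, -2.0, 1.5]),
                         rng=np.random.default_rng(0))
print(noisy)
```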