Privacy preservation for federated learning in health care

July 12, 2024 | Sarthak Pati, Sourav Kumar, Amokh Varma, Brandon Edwards, Charles Lu, Liangqiong Qu, Justin J. Wang, Anantharaman Lakshminarayanan, Shih-han Wang, Micah J. Sheller, Ken Chang, Praveer Singh, Daniel L. Rubin, Jayashree Kalpathy-Cramer, Spyridon Bakas
The article "Privacy Preservation for Federated Learning in Health Care" by Sarthak Pati et al. discusses the importance of privacy in healthcare and the challenges posed by federated learning (FL) in addressing these concerns. FL allows multiple healthcare institutions to collaboratively train AI models without sharing raw patient data, which is crucial for maintaining patient confidentiality and trust. However, FL introduces new privacy risks, such as data exfiltration, model inversion, membership inference, and model extraction.

The authors review various mitigation techniques, including secure multi-party computation (SMPC), homomorphic encryption (HE), confidential computing (CC), differential privacy (DP), and privacy-aware model objectives. These techniques aim to protect against system-level threats, information extraction, and poisoning attacks. The article highlights the trade-offs between computational efficiency and privacy guarantees, emphasizing the need for healthcare researchers to carefully consider these trade-offs when designing FL systems. The goal is to provide a comprehensive guide for researchers to ensure secure and private collaborative AI in healthcare.
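To make the differential privacy idea concrete, here is a minimal sketch (not from the article) of how a client could clip and noise its model update before sending it to the FL server, in the style of DP-SGD. The function name and parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update to a fixed L2 norm, then add
    Gaussian noise scaled to that norm (DP-SGD-style sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # Scale the update down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

The clipping bounds each client's influence on the aggregate (its sensitivity), which is what lets the added Gaussian noise translate into a formal (ε, δ) privacy guarantee; the actual accounting of ε across training rounds is handled by a separate privacy accountant and is omitted here.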
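The SMPC-based secure aggregation mentioned above can be sketched with pairwise additive masking: each pair of clients agrees on a shared random mask that one adds and the other subtracts, so the server sees only masked updates while the masks cancel in the sum. This is a simplified illustration under the assumption of no dropouts (real protocols such as Bonawitz et al.'s secure aggregation handle key agreement and client failures):

```python
import numpy as np

def mask_updates(updates, rng=None):
    """Pairwise additive masking: each pair (i, j) shares a random mask
    that client i adds and client j subtracts, so individual masked
    updates look random but the masks cancel in the aggregate."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(updates)
    masked = [np.asarray(u, dtype=float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # In a real protocol this mask is derived from a key agreed
            # between clients i and j; here we just sample it directly.
            r = rng.normal(size=masked[0].shape)
            masked[i] += r
            masked[j] -= r
    return masked
```

The server can then compute `sum(masked)` to recover the true sum of updates without ever observing any single client's plaintext update, which is the property that makes aggregation-based FL compatible with institutions that cannot reveal per-site model deltas.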