April 8-12, 2024, Avila, Spain | Eugene Frimpong, Khoa Nguyen, Mindaugas Budzys, Tanveer Khan, Antonis Michalas
GuardML is a privacy-preserving machine learning (PPML) system that leverages hybrid homomorphic encryption (HHE) to enable secure and efficient ML services on constrained end devices. The system addresses the privacy concerns associated with traditional ML applications by ensuring that data and model privacy are preserved during inference. HHE combines symmetric encryption with homomorphic encryption to reduce computational and communication overhead, making it suitable for real-world applications. The authors introduce two protocols, 2GML and 3GML, which are designed for different use cases. 2GML is suitable for scenarios where the cloud service provider (CSP) owns the ML model, while 3GML is ideal for scenarios where the analyst owns the model and does not wish to reveal it to the CSP.
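To make the setting more concrete, the sketch below shows, in greatly simplified form, what "evaluating a model on encrypted data" looks like: a client encrypts its features, the CSP computes a linear score directly on the ciphertexts, and only the key holder can read the result. This is not GuardML's construction; GuardML builds on HHE, in which the client uses a lightweight symmetric cipher and the CSP converts those ciphertexts into homomorphic ones before inference. The toy below instead uses textbook Paillier (additively homomorphic) with deliberately tiny, hard-coded parameters, and every name and value in it is illustrative only.

```python
# Toy illustration only: textbook Paillier (additively homomorphic) with tiny,
# insecure, hard-coded primes. GuardML itself relies on hybrid homomorphic
# encryption, not this scheme; the point here is just the division of roles:
# the client encrypts and decrypts, the CSP computes on ciphertexts blindly.
import math
import random

P, Q = 10007, 10009            # demo primes (insecure key size)
N, N2 = P * Q, (P * Q) ** 2    # public modulus and its square
LAM = math.lcm(P - 1, Q - 1)   # private key
MU = pow(LAM, -1, N)           # precomputed decryption constant (g = N + 1)

def encrypt(m: int) -> int:
    """Client: encrypt an integer under the public key N."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(N + 1, m % N, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Client: recover the plaintext, mapped back to a signed value."""
    m = ((pow(c, LAM, N2) - 1) // N) * MU % N
    return m - N if m > N // 2 else m

def csp_linear_score(enc_x: list[int], w: list[int], b: int) -> int:
    """CSP: compute Enc(w . x + b) without ever seeing x.
    Additive homomorphism: Enc(m1) * Enc(m2) -> Enc(m1 + m2), Enc(m)^k -> Enc(k * m)."""
    acc = encrypt(b)                         # encrypting needs only the public key
    for c, wi in zip(enc_x, w):
        acc = (acc * pow(c, wi % N, N2)) % N2
    return acc

if __name__ == "__main__":
    features = [3, -1, 4]                    # client's private data
    weights, bias = [2, 5, -3], 7            # linear model held by the CSP or analyst
    enc_features = [encrypt(x) for x in features]    # only ciphertexts leave the client
    enc_score = csp_linear_score(enc_features, weights, bias)
    print("decrypted score:", decrypt(enc_score))    # 2*3 + 5*(-1) - 3*4 + 7 = -4
```

In the HHE setting the client would not produce homomorphic ciphertexts at all: it would send short symmetric ciphertexts plus a one-time HE encryption of its symmetric key, and the CSP would homomorphically evaluate the cipher's decryption circuit before running inference. That shift is what keeps client-side computation and transmitted ciphertext sizes small, which is the efficiency argument the paper makes for constrained devices.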
The 3GML protocol was evaluated on a sensitive medical dataset (ECG data) for heart-disease classification. Encrypted inference achieved accuracy close to that of plaintext inference, with only a slight reduction. Computational and communication costs were significantly lower than with traditional homomorphic encryption (HE) schemes, making HHE a viable option for PPML applications. The experiments showed that the CSP bears most of the computational burden, while the client and analyst incur minimal costs. Communication costs were also reduced because HHE allows for much smaller ciphertexts. Overall, the results indicate that HHE is a promising approach for enabling secure and efficient ML services on constrained devices.