22 May 2024 | Tobin South*, Alexander Camuto, Shrey Jain, Shayla Nguyen, Robert Mahari, Christian Paquin, Jason Morton, Alex 'Sandy' Pentland
This paper presents a method for verifiable evaluation of machine learning (ML) models using zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). The authors address the challenge of verifying the performance claims of closed-source ML models, which are increasingly common in commercial applications. The proposed method generates zero-knowledge computational proofs of model outputs over datasets, yielding verifiable evaluation attestations showing that a model with fixed private weights achieves stated performance or fairness metrics over public inputs. The system is flexible and can be applied to any standard neural network model, with varying compute requirements. The authors demonstrate the effectiveness of their approach across a range of real-world models, highlighting key challenges and design solutions. This work introduces a new transparency paradigm for the verifiable evaluation of private ML models, making model evaluations accountable and robust, particularly in high-stakes scenarios where model reliability and fairness are crucial.
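To fix the shape of the protocol the abstract describes, here is a minimal sketch of the attestation flow, assuming a proving backend that can attest to a single inference under a public commitment to the weights. The names `prove_inference` and `verify_inference`, the toy model, and the attestation layout are hypothetical placeholders for illustration, not the paper's implementation; in practice each inference proof would come from a zkSNARK circuit compiled from the neural network itself.

```python
import hashlib

# Illustrative sketch of the attestation flow described in the abstract.
# `prove_inference` and `verify_inference` are hypothetical stand-ins for
# a real zkSNARK proving backend; they are NOT the paper's actual API.

def weight_commitment(weights: bytes) -> str:
    """Public commitment (hash) binding the attestation to fixed private weights."""
    return hashlib.sha256(weights).hexdigest()

def evaluate_with_proofs(model, weights, dataset, prove_inference):
    """Run the model over a public dataset, emitting one proof per inference.

    Each proof attests that the model committed to by `model_commitment`
    maps input x to output y, without revealing the weights themselves.
    """
    attestation = {"model_commitment": weight_commitment(weights), "records": []}
    for x in dataset:
        y = model(x)
        proof = prove_inference(weights, x, y)  # hypothetical zkSNARK call
        attestation["records"].append({"input": x, "output": y, "proof": proof})
    return attestation

def verify_attestation(attestation, labels, verify_inference):
    """Check every proof against the public commitment, then recompute
    the claimed metric (here, accuracy) from the proven outputs."""
    c = attestation["model_commitment"]
    for rec in attestation["records"]:
        assert verify_inference(c, rec["input"], rec["output"], rec["proof"])
    outputs = [rec["output"] for rec in attestation["records"]]
    return sum(o == l for o, l in zip(outputs, labels)) / len(labels)

if __name__ == "__main__":
    # Toy stand-ins: a threshold "model" and a no-op proof system,
    # purely to make the control flow executable end to end.
    model = lambda x: int(x > 0.5)
    weights = b"\x00\x01"  # placeholder private weights
    dataset, labels = [0.2, 0.9, 0.7], [0, 1, 1]
    att = evaluate_with_proofs(model, weights, dataset,
                               prove_inference=lambda w, x, y: b"proof")
    acc = verify_attestation(att, labels,
                             verify_inference=lambda c, x, y, p: True)
    print(f"proven accuracy: {acc:.2f}")
```

The point the sketch tries to make concrete is the trust boundary: the verifier never sees the weights, only the commitment, the public inputs and outputs, and the proofs, yet can recompute any performance or fairness metric over the proven outputs.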