October 14-18, 2024 | Kasra Abbaszadeh, Christodoulos Pappas, Jonathan Katz, Dimitrios Papadopoulos
The paper introduces KAIZEN, a zero-knowledge proof-of-training (zkPoT) scheme for deep neural networks (DNNs). KAIZEN enables a prover to demonstrate the correctness of their trained model without revealing any additional information about the model or the dataset. The key contributions of the work include:
1. **Optimized Proof System for Gradient Descent**: KAIZEN proposes an optimized GKR-style (sumcheck-based) proof system for gradient-descent iterations. This system offers low prover cost and succinct verification, making it suitable for DNN training, which involves many iterations.
2. **Recursive Composition of Proofs**: The construction uses recursive proof composition, also known as incrementally verifiable computation (IVC), to achieve succinct proofs for the entire training process. This approach ensures that the proof size and verifier time are independent of the number of iterations.
3. **Efficient Aggregation Techniques**: To handle the large circuit size of the verifier algorithm, the paper introduces an efficient aggregation scheme for multivariate polynomial commitments. This scheme reduces the verification overhead and makes the recursive composition of proofs feasible.
4. **Implementation and Evaluation**: KAIZEN is implemented and evaluated on a VGG-11 model with 10 million parameters. The results show that KAIZEN achieves a prover time of 15 minutes per iteration, a proof size of 1.66 megabytes, and a verifier time of 130 milliseconds. These metrics are significantly better than those of generic IVC approaches: 24× faster prover time and at least 27× lower memory usage.
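To make contribution 1 concrete, recall that GKR-style systems rest on the sumcheck protocol, which reduces a claim about a sum over the Boolean hypercube to a single evaluation of a multilinear polynomial. The sketch below is a minimal, illustrative Python version over a toy prime field; the modulus and the collapsing of the interactive prover/verifier exchange into one function are simplifications for exposition, not the paper's actual parameters or protocol.

```python
import random

P = 2**61 - 1  # toy prime modulus (illustrative choice, not the paper's field)

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (its evaluations on
    {0,1}^n, first variable = most significant bit) at an arbitrary point."""
    evals = list(table)
    for r in point:
        half = len(evals) // 2
        # Fold out one variable: (1 - r) * f(0, ...) + r * f(1, ...)
        evals = [(evals[i] * (1 - r) + evals[half + i] * r) % P
                 for i in range(half)]
    return evals[0]

def sumcheck_prove_verify(table):
    """Run the sumcheck protocol for the claim H = sum of `table` over {0,1}^n.
    Prover and verifier roles are interleaved here for brevity."""
    n = len(table).bit_length() - 1  # table length must be a power of two
    claim = sum(table) % P
    evals = list(table)
    rs = []
    for _ in range(n):
        half = len(evals) // 2
        # Prover: the round polynomial g_i is linear, so g_i(0), g_i(1) suffice.
        g0 = sum(evals[:half]) % P
        g1 = sum(evals[half:]) % P
        # Verifier: consistency check against the running claim.
        if (g0 + g1) % P != claim:
            return False
        r = random.randrange(P)
        rs.append(r)
        # New claim is g_i(r), by linear interpolation between g_i(0), g_i(1).
        claim = (g0 * (1 - r) + g1 * r) % P
        # Prover folds the evaluation table at the challenge r.
        evals = [(evals[i] * (1 - r) + evals[half + i] * r) % P
                 for i in range(half)]
    # Final check: one oracle evaluation of the multilinear extension.
    return claim == mle_eval(table, rs)
```

Note that the verifier's work per round is constant, which is what makes sumcheck-based systems attractive for the large, repetitive circuits arising in gradient descent.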
The paper also discusses the challenges and limitations of existing techniques, such as the lack of strong security guarantees and scalability issues in previous zkPoT constructions for DNNs. By addressing these challenges, KAIZEN provides a practical and efficient solution for proving the integrity of DNN training processes.
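The recursive composition in contribution 2 can be illustrated structurally: at each training iteration, the prover produces a proof attesting both that the step was computed correctly and that the previous step's proof verified. The toy, non-cryptographic Python sketch below models only this control flow; the names `step`, `prove_step`, and `verify` are invented for illustration, the "proof" is a plain record rather than a SNARK, and a real IVC verifier runs in time independent of the chain length (this toy walks the chain for clarity).

```python
def step(w, lr=0.1):
    """One gradient-descent step on f(w) = (w - 3)^2, a toy stand-in
    for a full DNN training iteration."""
    return w - lr * 2 * (w - 3)

def prove_step(w_prev, proof_prev):
    """Advance one IVC step. In a real IVC the returned proof would be a
    succinct argument that the step was applied correctly AND that
    proof_prev verified; here it is a plain record showing the recursion."""
    w_next = step(w_prev)
    return w_next, {"w_prev": w_prev, "w_next": w_next, "prev": proof_prev}

def verify(w0, wT, proof):
    """Check that `proof` connects initial state w0 to final state wT by
    replaying each step (a real IVC verifier would not replay anything)."""
    w = wT
    while proof is not None:
        if proof["w_next"] != w or step(proof["w_prev"]) != w:
            return False
        w = proof["w_prev"]
        proof = proof["prev"]
    return w == w0

# Drive 100 "training" iterations, carrying the proof forward each time.
w, proof = 0.0, None
for _ in range(100):
    w, proof = prove_step(w, proof)
```

The point of the recursion is that the statement proved at step i already contains "proof i-1 verified", so the final proof alone certifies the whole chain with size and verification time independent of the number of iterations.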