This paper explores the application of differential privacy (DP) in Neural Tangent Kernel (NTK) regression, a popular framework for studying the learning mechanisms of deep neural networks. The authors introduce a "Gaussian Sampling Mechanism" that adds positive semi-definite noise to the NTK matrix, ensuring both differential privacy and good test accuracy. They provide theoretical guarantees for the privacy and utility of their approach, demonstrating that NTK regression can maintain high accuracy under a modest privacy budget. Experiments on the CIFAR-10 dataset validate these findings, showing that the proposed method effectively balances privacy and accuracy. This work is the first to offer DP guarantees for NTK regression, bridging the gap between practical deep learning models and differential privacy.
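As a rough illustration only, and not the authors' exact construction, the idea of perturbing a kernel matrix with positive semi-definite noise can be sketched as follows: sample Gaussian vectors and accumulate their outer products, which yields a PSD (Wishart-type) perturbation, so the noised Gram matrix remains a valid kernel matrix for downstream regression. All names below (`psd_gaussian_noise`, `private_kernel_regression`) are hypothetical, and the parameters `m` and `sigma` are placeholders that are not calibrated to any specific (epsilon, delta) guarantee.

```python
import numpy as np

def psd_gaussian_noise(n, m, sigma, rng):
    """Sum of m outer products of i.i.d. N(0, sigma^2) vectors.

    The result is positive semi-definite by construction (a scaled
    Wishart sample). Hypothetical helper for illustration; sigma and m
    are NOT calibrated to any specific (epsilon, delta).
    """
    Z = rng.normal(scale=sigma, size=(m, n))  # m Gaussian vectors in R^n
    return Z.T @ Z                            # (n, n) PSD noise matrix

def private_kernel_regression(K_train, K_test, y_train, ridge, m, sigma, seed=0):
    """Kernel ridge regression on a PSD-noised training kernel.

    K_train: (n, n) NTK Gram matrix on training points.
    K_test:  (t, n) NTK cross-kernel between test and training points.
    """
    rng = np.random.default_rng(seed)
    n = K_train.shape[0]
    K_noised = K_train + psd_gaussian_noise(n, m, sigma, rng)
    # Standard kernel ridge predictor computed with the perturbed kernel.
    alpha = np.linalg.solve(K_noised + ridge * np.eye(n), y_train)
    return K_test @ alpha
```

The appeal of PSD noise in this sketch is that the perturbed matrix stays a valid (and well-conditioned, after adding the ridge term) kernel; obtaining a formal DP guarantee would additionally require calibrating the noise scale to the sensitivity of the NTK matrix, which is the substance of the paper's analysis.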