Training Classical Neural Networks by Quantum Machine Learning

Dated: February 27, 2024 | Chen-Yu Liu,1,2,* En-Jui Kuo,1,3,† Chu-Hsuan Abraham Lin,1,4 Sean Chen,1,5,6 Jason Gemsun Young,7 Yeong-Jar Chang,7 and Min-Hsiu Hsieh1,‡
This paper introduces a novel approach to training classical neural networks (NNs) using quantum machine learning (QML). The method leverages the exponentially large Hilbert space of a quantum system to map a classical NN with \( M \) parameters to a quantum neural network (QNN) with \( O(\text{polylog}(M)) \) rotational gate angles, significantly reducing the number of trainable parameters. Unlike traditional QML methods, the model trained this way can be used directly on classical computers, enhancing practicality and efficiency.

The authors demonstrate the effectiveness of their approach through numerical experiments on the MNIST and Iris datasets, showing that the proposed method achieves parameter reduction while maintaining or improving accuracy. The study also investigates the impact of QNN depth and of the number of measurement shots on training. The results indicate that deeper QNNs generally yield better performance, and that increasing the number of measurement shots improves accuracy, particularly when the shot count is a multiple of the Hilbert space size.

The paper concludes by discussing the theoretical basis for the existence of QNN approximations of classical NNs and outlines future research directions, including exploring different QNN ansatzes, optimizing the mapping technique, and investigating more efficient optimization algorithms. The authors highlight the potential of this approach to transform the training of large classical ML models, emphasizing its practical implications for everyday applications.
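To make the parameter counting concrete, here is a minimal NumPy sketch of the kind of mapping the summary describes: an \( n \)-qubit state carries \( 2^n \) amplitudes, so a circuit with \( n \cdot \text{depth} = O(\text{polylog}(M)) \) rotation angles can be measured to produce \( M = 2^n \) classical weights. The RY-plus-CNOT ansatz, the probability-based readout, and the centering/rescaling step are all illustrative assumptions on our part, not the paper's actual construction.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def apply_single_qubit(state, gate, qubit, n):
    """Apply a 2x2 gate to `qubit` of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT(control -> target) to an n-qubit statevector."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1                      # act on the control = |1> slab
    axis = target if target < control else target - 1
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=axis)
    return psi.reshape(-1)

def qnn_weights(angles, n, shots=None, rng=None):
    """Map len(angles) = n * depth rotation angles to 2**n classical
    weights by preparing a parameterized n-qubit state and reading out
    its measurement distribution (a finite-shot estimate when `shots`
    is given, exact probabilities otherwise)."""
    depth = len(angles) // n
    state = np.zeros(2 ** n)
    state[0] = 1.0                        # start from |0...0>
    k = 0
    for _ in range(depth):
        for q in range(n):                # rotation layer
            state = apply_single_qubit(state, ry(angles[k]), q, n)
            k += 1
        for q in range(n - 1):            # linear entangling layer
            state = apply_cnot(state, q, q + 1, n)
    probs = np.abs(state) ** 2
    if shots is not None:                 # simulate finite measurement shots
        rng = rng or np.random.default_rng(0)
        probs = rng.multinomial(shots, probs) / shots
    # Hypothetical readout map: center and rescale the probabilities so
    # they can serve as signed classical NN weights.
    return (probs - 1.0 / 2 ** n) * 2 ** n

# 2^4 = 16 classical weights from only 2 * 4 = 8 gate angles (depth 2).
n = 4
angles = np.random.default_rng(1).uniform(0, 2 * np.pi, size=2 * n)
weights = qnn_weights(angles, n, shots=100 * 2 ** n)
print(weights.reshape(4, 4))              # e.g. fill a 4x4 weight matrix
```

Note that the usage example chooses the shot count as a multiple of the Hilbert space size \( 2^n \), echoing the summary's observation that such shot counts are particularly effective; in a training loop, the angles would be optimized so that the resulting classical weights minimize the NN's loss.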