Training Classical Neural Networks by Quantum Machine Learning


February 27, 2024 | Chen-Yu Liu, En-Jui Kuo, Chu-Hsuan Abraham Lin, Sean Chen, Jason Gemsun Young, Yeong-Jar Chang, Min-Hsiu Hsieh
This paper proposes a novel method to train classical neural networks (NNs) using quantum machine learning (QML). The approach maps a classical NN with M parameters to a quantum neural network (QNN) with O(polylog(M)) rotational gate angles, significantly reducing the number of trainable parameters. These gate angles are updated to train the classical NN, and the parameter reduction is made possible by the exponentially large Hilbert space of the quantum system.

The proposed framework maps the classical NN weights to the Hilbert space of a quantum state, so tuning the parameterized quantum state (represented by the QNN) adjusts the classical NN weights. Because the trained model is an ordinary classical NN, it is compatible with classical hardware and inference requires no quantum computer, which enhances the practicality of quantum computing. The paper also discusses the theoretical rationale for the existence of QNN approximations of classical NNs: quantum states that can be generated by quantum circuits of polynomial depth provide a sufficient condition for such QNN structures to exist.

Numerical results on the MNIST and Iris datasets demonstrate the effectiveness of the approach. Deeper QNNs exhibit superior performance due to their increased expressibility, and the number of measurement shots affects classification accuracy. The authors argue that parameter reduction on a polylogarithmic scale could revolutionize the training of large classical ML models, opening a new branch of QML whose trained results can directly benefit classical computing in everyday applications.
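To make the mapping concrete, here is a minimal sketch of the idea in Python using PennyLane. It is not the authors' exact construction: the hardware-efficient RY/CNOT ansatz, the affine rescaling of measured probabilities into NN weights (the `qnn_to_weights` helper), and the tiny 4-4-3 network are illustrative assumptions. What it demonstrates is the paper's core loop: a QNN over n = ceil(log2(M)) qubits is trained, its measurement outcomes are decoded into the M classical weights, and only the classical NN is needed at inference time.

```python
import math
import pennylane as qml
from pennylane import numpy as np  # autograd-backed NumPy for gradients

# Toy classical NN: 4 -> 4 -> 3 (Iris-sized). M = 35 weights in total.
M = 4 * 4 + 4 + 4 * 3 + 3                    # W1, b1, W2, b2
n_qubits = math.ceil(math.log2(M))           # 6 qubits -> 64 probabilities
n_layers = 3                                 # deeper QNN -> more expressibility

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(angles):
    # Hardware-efficient ansatz: RY rotations + a CNOT chain per layer.
    for l in range(n_layers):
        for q in range(n_qubits):
            qml.RY(angles[l, q], wires=q)
        for q in range(n_qubits - 1):
            qml.CNOT(wires=[q, q + 1])
    return qml.probs(wires=range(n_qubits))  # 2**n_qubits outcome probabilities

def qnn_to_weights(angles):
    # Illustrative decoding: rescale the first M probabilities around zero.
    return qnn(angles)[:M] * (2 ** n_qubits) - 1.0

def forward(x, w):
    # Classical forward pass using weights decoded from the quantum state.
    W1, b1 = w[:16].reshape(4, 4), w[16:20]
    W2, b2 = w[20:32].reshape(4, 3), w[32:35]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def loss(angles, X, y):
    # Softmax cross-entropy on the classical NN's outputs.
    logits = forward(X, qnn_to_weights(angles))
    logits = logits - np.max(logits, axis=1, keepdims=True)
    logp = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    return -np.mean(np.sum(np.eye(3)[y] * logp, axis=1))

# Train the n_layers * n_qubits = 18 gate angles, not the 35 NN weights.
angles = np.array(np.random.uniform(0, np.pi, (n_layers, n_qubits)),
                  requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
X = np.array(np.random.randn(8, 4))          # placeholder data
y = np.array([0, 1, 2, 0, 1, 2, 0, 1])
for _ in range(50):
    angles = opt.step(lambda a: loss(a, X, y), angles)

# Inference is purely classical once the weights are extracted.
weights = qnn_to_weights(angles)
predictions = np.argmax(forward(X, weights), axis=1)
```

In this toy setting, 18 gate angles (3 layers x 6 qubits) stand in for 35 classical weights; the gap grows at realistic scales, since 2^n amplitudes are addressed with only polylog(M) trainable angles.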