10 Jun 2024 | Chen-Yu Liu, En-Jui Kuo, Chu-Hsuan Abraham Lin, Jason Gemsun Young, Yeong-Jar Chang, Min-Hsiu Hsieh, Hsi-Sheng Goan
The Quantum-Train (QT) framework is a novel hybrid quantum-classical machine learning approach that addresses challenges in data encoding, model compression, and inference hardware requirements. By integrating quantum computing with classical machine learning algorithms, QT reduces the number of parameters in classical neural networks (NNs) from M to O(polylog(M)) during training. This is achieved by mapping the weights of a classical NN into the Hilbert space associated with a quantum neural network (QNN). The QT framework allows for efficient training and inference on classical hardware, eliminating the need for quantum computing resources during inference. This significantly lowers the barrier to using quantum machine learning (QML) in practical applications.
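The parameter-count argument can be made concrete with a back-of-the-envelope sketch: a register of n qubits spans a 2^n-dimensional Hilbert space, so n = ⌈log₂ M⌉ qubits suffice to index M classical weights, and a QNN whose trainable parameter count is polynomial in n is O(polylog(M)) in M. The helper below is illustrative only; the polynomial degree depends on the chosen ansatz, and degree 2 here is an assumption, not the paper's figure.

```python
import math

def qt_parameter_count(M, poly_degree=2):
    """Qubits and trainable QNN parameters needed to index M classical
    weights under the QT scaling argument.

    Hypothetical helper: the exact polynomial degree depends on the QNN
    ansatz; poly_degree=2 is an illustrative assumption.
    """
    n_qubits = math.ceil(math.log2(M))    # 2**n basis states cover M weights
    qnn_params = n_qubits ** poly_degree  # O(polylog(M)) trainable parameters
    return n_qubits, qnn_params

# A classical NN with ~1M weights needs only ~20 qubits to index:
n, p = qt_parameter_count(1_000_000)  # n = 20, p = 400
```

The compression comes from training the 400 QNN parameters instead of the million classical weights those basis states index.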
The QT framework maps quantum states to classical NN parameters, enabling the training of classical NNs using quantum circuits. The QNN is parameterized with gates such as R_y and CNOT, and its parameters are optimized to minimize a cost function, typically cross-entropy loss for classification tasks. The mapping model, G_γ, is used to derive classical NN parameters from quantum measurement probabilities. This approach allows for efficient parameterization and training of classical NNs with significantly reduced parameter counts.
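The pipeline described above can be sketched end to end with a plain NumPy statevector simulation: an R_y/CNOT ansatz produces measurement probabilities over the computational basis, and a mapping model turns those probabilities into classical NN weights. Note the mapping used here is a simple affine stand-in for the paper's trainable G_γ network, and the two-qubit circuit shape is an assumption for illustration.

```python
import numpy as np

def ry(theta):
    """Single-qubit R_y rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qnn_probabilities(thetas):
    """Statevector simulation of a 2-qubit R_y + CNOT ansatz; returns
    measurement probabilities over the computational basis."""
    state = np.zeros(4)
    state[0] = 1.0                                            # |00>
    state = CNOT @ np.kron(ry(thetas[0]), ry(thetas[1])) @ state
    return state ** 2                                         # real amplitudes -> probabilities

def mapping_model(probs, gamma):
    """Toy stand-in for G_gamma: an affine map from measurement
    probabilities to classical NN weights. The paper's G_gamma is a
    small trainable model; this linear form is an assumption."""
    scale, shift = gamma
    return scale * probs + shift

probs = qnn_probabilities([0.3, 1.1])          # sums to 1
weights = mapping_model(probs, gamma=(2.0, -0.5))  # 4 classical weights
```

In training, gradients of the task loss flow through both the circuit parameters θ and the mapping parameters γ, so the classical weights are never optimized directly.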
Numerical results show that QT achieves competitive accuracy with fewer parameters compared to classical and hybrid quantum-classical models. For example, on the MNIST dataset, QT achieves 93.81% testing accuracy with 24.3% of the classical approach's parameters. On CIFAR-10, QT reaches 60.69% testing accuracy with only 8.1% of the classical model's parameters. These results demonstrate the effectiveness of QT in reducing model complexity while maintaining high accuracy.
The QT framework also shows improved generalization, with lower generalization error than classical and hybrid models. This is attributed to the reduced parameter count and the framework's ability to handle complex datasets efficiently. It is particularly effective in scenarios where classical models face scalability and hardware limitations.
The QT approach is practical for real-world applications due to its ability to operate on classical hardware after training. This makes it a viable solution for widespread adoption, especially in scenarios where quantum computing resources are limited. The framework's efficiency and effectiveness in reducing model complexity and improving generalization make it a promising direction for future research in hybrid quantum-classical machine learning.