March 15, 2024 | Joseph Bowles, Shahnawaz Ahmed, and Maria Schuld
This paper presents a large-scale benchmark study of 12 popular quantum machine learning models on 6 binary classification tasks, spanning 160 individual datasets. Using an open-source package built on the PennyLane software framework, the authors systematically compare the quantum models against standard classical machine learning baselines. The results show that the classical models out-compete the quantum models on these small-scale datasets, suggesting that "quantumness" is not the decisive ingredient for small learning tasks, and the study distills its findings into five open questions for quantum model design.

The paper also examines the difficulties of benchmarking quantum machine learning itself: the sensitivity of conclusions to experimental design, the small scale of current quantum hardware, the computational cost of simulating models at higher qubit numbers, and the influence of commercialization on research. It argues for greater scientific rigor, for diverse and representative datasets, and for awareness of the biases that benchmarking practices can introduce.

Alongside a detailed overview of the models, datasets, and benchmarking procedures, the authors stress the importance of understanding the inductive bias of near-term quantum models and the role that "quantumness" plays in their performance. They conclude that while quantum machine learning algorithms show promise, evaluating them fairly requires more robust and critical benchmarking practices, as well as more systematic studies of quantum model design.
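To make the comparison concrete, here is a minimal sketch of the kind of experiment the study runs at scale: a small variational quantum classifier trained on one binary classification dataset and scored against a classical baseline. Everything below is an illustrative assumption rather than the paper's actual pipeline; scikit-learn's `make_moons` stands in for one of the benchmark datasets, and the angle-embedding ansatz and MLP baseline are arbitrary choices. The authors' open-source package implements the real protocol.

```python
# Illustrative sketch only: one quantum model vs. one classical baseline
# on a single toy dataset, not the paper's benchmarking pipeline.
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode the 2-dimensional input as rotation angles, then apply
    # a trainable entangling ansatz and read out a single expectation.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def cost(weights, X, y):
    # Square loss between expectations in [-1, 1] and labels mapped
    # to {+1, -1} (label 0 -> +1, label 1 -> -1).
    preds = pnp.stack([circuit(weights, x) for x in X])
    targets = 1 - 2 * y
    return pnp.mean((preds - targets) ** 2)

def quantum_predict(weights, X):
    # Negative expectation -> class 1, consistent with the label map above.
    return np.array([int(circuit(weights, x) < 0) for x in X])

# One small dataset; the paper aggregates results over many such datasets.
X, y = make_moons(n_samples=150, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = pnp.array(np.random.default_rng(0).normal(size=shape), requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(40):
    weights = opt.step(lambda w: cost(w, X_train, y_train), weights)

# Classical baseline: a small feed-forward neural network.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("quantum accuracy:  ", accuracy_score(y_test, quantum_predict(weights, X_test)))
print("classical accuracy:", mlp.score(X_test, y_test))
```

In the study itself, a loop like this is repeated across many models, datasets, and hyperparameter settings, which is where the computational cost of benchmarking discussed above comes from.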