17 Jan 2024 | Cenk Tüysüz, Su Yeon Chang, Maria Demidik, Karl Jansen, Sofia Vallecorsa, Michele Grossi
This paper investigates how equivariant quantum neural networks (EQNNs) behave in the presence of hardware noise. EQNNs are a promising approach within geometric quantum machine learning (GQML), exploiting symmetry to improve trainability and generalization, but the role of hardware noise in their training had not previously been explored. The study shows that certain EQNN models can preserve equivariance under Pauli channels, whereas this is not possible under the amplitude damping (AD) channel: there, symmetry breaking grows linearly with both the number of layers and the noise strength. Numerical simulations and hardware experiments on up to 64 qubits support these findings. Strategies such as choosing an appropriate representation and adaptive thresholding are proposed to enhance symmetry protection in EQNNs under noise.
The paper analyzes several noise models, including Pauli, depolarizing, and amplitude damping channels. It demonstrates that the AD channel induces symmetry breaking that grows linearly in both the number of layers and the noise strength, whereas the Pauli channel leaves the symmetry intact. The results also show that EQNNs suffer from noise-induced barren plateaus (BPs), i.e., an exponential concentration of the cost function that hinders training. Adaptive thresholding, which adjusts the decision threshold during training, is shown to mitigate these effects.
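To make the contrast concrete, here is a minimal NumPy sketch (not from the paper) that checks channel equivariance directly. It assumes, purely for illustration, a single-qubit bit-flip representation U = X of a Z2 symmetry; the violation ‖E(UρU†) − U E(ρ)U†‖ vanishes for the depolarizing (Pauli-type) channel but not for the AD channel.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_channel(kraus, rho):
    """Apply a channel given as a list of Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def depolarizing(p):
    return [np.sqrt(1 - p) * I, np.sqrt(p / 3) * X,
            np.sqrt(p / 3) * Y, np.sqrt(p / 3) * Z]

def amplitude_damping(gamma):
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return [K0, K1]

def equivariance_violation(kraus, U, rho):
    """|| E(U rho U†) - U E(rho) U† || as a symmetry-breaking witness."""
    lhs = apply_channel(kraus, U @ rho @ U.conj().T)
    rhs = U @ apply_channel(kraus, rho) @ U.conj().T
    return np.linalg.norm(lhs - rhs)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
U = X  # assumed Z2 bit-flip representation, for illustration only

print(equivariance_violation(depolarizing(0.1), U, rho))       # ~0: Pauli noise commutes
print(equivariance_violation(amplitude_damping(0.1), U, rho))  # > 0: AD breaks the symmetry
```

Because any Pauli operator either commutes or anticommutes with the Kraus operators of a Pauli channel, the signs cancel under conjugation and the violation is exactly zero; the non-unital AD channel has no such structure with respect to X.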
Experiments on two-qubit and multi-qubit EQNN models reveal that symmetry breaking is more pronounced under the AD channel. The EQNN-Z-native model performs exceptionally well under both the depolarizing (DP) and AD channels, which is attributed to its representation commuting with the AD channel. However, this model is difficult to implement on current quantum hardware due to limitations of the native gate set.
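The commutation claim behind EQNN-Z-native can be illustrated the same way: the AD Kraus operators are covariant under Z rotations (the diagonal K0 commutes with them, and K1 only acquires a phase that cancels inside the channel), so a Z-generated symmetry action survives AD noise. A self-contained sketch, with exp(−iθZ/2) as an assumed stand-in for the model's representation:

```python
import numpy as np

gamma, theta = 0.1, 0.7  # damping strength and an arbitrary Z-rotation angle
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])  # exp(-i*theta*Z/2)

def channel(r):
    """Single-qubit amplitude damping channel."""
    return K0 @ r @ K0.conj().T + K1 @ r @ K1.conj().T

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # |+><+|, not a Z eigenstate
lhs = channel(U @ rho @ U.conj().T)
rhs = U @ channel(rho) @ U.conj().T
print(np.linalg.norm(lhs - rhs))  # ~0: AD is covariant under Z-generated actions
```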
The study also highlights the importance of symmetry-breaking metrics, such as χ² and label misassignment (LM), for quantifying the impact of noise on EQNNs. These metrics confirm that symmetry breaking grows linearly with the number of layers and the noise strength, in line with the theoretical predictions. The results suggest that adaptive thresholding and careful representation choices are crucial for maintaining equivariance in EQNNs under hardware noise. Future research directions include exploring the implications of continuous symmetry groups and improving hardware implementations to reduce symmetry breaking.
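As a rough illustration of the adaptive-thresholding idea, the sketch below (a toy under stated assumptions, not the paper's exact procedure) places the binary decision boundary between the per-class medians of the measured expectation values instead of fixing it at zero, which compensates when noise concentrates and shifts the outputs toward a biased fixed point:

```python
import numpy as np

def adaptive_threshold(outputs, labels):
    """Place the decision boundary between the per-class medians of the
    (noise-shifted, concentrated) expectation values, instead of at 0."""
    m0 = np.median(outputs[labels == 0])
    m1 = np.median(outputs[labels == 1])
    return 0.5 * (m0 + m1)

# Toy data: noise shrinks outputs toward a biased fixed point (an assumption).
rng = np.random.default_rng(0)
ideal = np.where(rng.random(200) < 0.5, -1.0, 1.0)   # noiseless expectation values
labels = (ideal > 0).astype(int)
noisy = 0.1 * ideal + 0.12 + 0.02 * rng.standard_normal(200)  # concentrated + shifted

fixed_acc = np.mean((noisy > 0.0).astype(int) == labels)
adapt_acc = np.mean((noisy > adaptive_threshold(noisy, labels)).astype(int) == labels)
print(fixed_acc, adapt_acc)  # the adaptive threshold recovers the lost accuracy
```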