DeepH-2: Enhancing deep-learning electronic structure via an equivariant local-coordinate transformer

30 Jan 2024 | Yuxiang Wang, He Li, Zechen Tang, Honggeng Tao, Yanzhen Wang, Zilong Yuan, Zezhou Chen, Wenhui Duan, Yong Xu
DeepH-2 is a deep learning framework designed to enhance electronic structure calculations by integrating equivariant local-coordinate transformers. It improves the efficiency and accuracy of deep-learning density functional theory (DFT) Hamiltonian prediction, surpassing previous models such as DeepH and DeepH-E3. The framework unifies equivariant neural networks (ENNs), local-coordinate transformations, and transformer models into a single architecture called the equivariant local-coordinate transformer (ELCT).

By reducing the non-Abelian SO(3) group to the Abelian SO(2) group through local-coordinate transformations, DeepH-2 enables direct mixing of angular-momentum channels and facilitates GPU parallel computing. Exploiting the sparsity of the Clebsch-Gordan coefficients in this local frame reduces the computational complexity of tensor products from O(L^6) to O(L^3), allowing the model to handle higher-angular-momentum features efficiently. The ELCT architecture also incorporates multi-head attention, enhancing the network's ability to extract diverse features and better describe the relationship between material structure and electronic structure.

DeepH-2 achieves state-of-the-art performance in predicting DFT Hamiltonian matrix elements, reaching sub-meV accuracy across various materials, including monolayer and bilayer graphene and MoS₂. Its efficiency and scalability make it suitable for large-scale materials studies, and its built-in equivariance preserves the underlying physical symmetries.
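The local-coordinate reduction can be illustrated with a minimal NumPy sketch (not the authors' code; function names and the restriction to l = 1 features are illustrative assumptions). An edge vector is rotated into a local frame whose z-axis lies along the edge; only rotations about z then remain, and an SO(2)-equivariant layer mixes each (m, −m) pair of feature components with a complex-style 2×2 weight block, which commutes with those residual z-rotations:

```python
import numpy as np

def rotation_aligning_to_z(v):
    """Return a 3x3 rotation R that maps the unit vector along v to e_z,
    defining the local-coordinate frame of an edge (Rodrigues' formula)."""
    v = v / np.linalg.norm(v)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, z)
    s = np.linalg.norm(axis)          # sin of the angle between v and z
    c = v @ z                         # cos of that angle
    if s < 1e-12:                     # v already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    axis /= s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])   # cross-product matrix of axis
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def so2_mix_l1(feat, w0, w_re, w_im):
    """SO(2)-equivariant mixing of a real l=1 feature, ordered (m=-1, 0, +1).
    m=0 is simply scaled; the (m=-1, m=+1) pair is mixed by a 2x2 block of
    the form [[a, -b], [b, a]], which commutes with rotations about local z."""
    W = np.array([[w_re, -w_im],
                  [w_im,  w_re]])
    out = np.empty(3)
    out[1] = w0 * feat[1]
    out[[0, 2]] = W @ feat[[0, 2]]
    return out
```

Because the per-m blocks are dense in only two components rather than coupling all (l, m) channels through Clebsch-Gordan contractions, the cost per feature degree drops from the O(L^6) of a full SO(3) tensor product to O(L^3), matching the scaling quoted above.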
Overall, DeepH-2 represents a significant advancement in deep learning for electronic structure calculations, offering a powerful and efficient framework for high-accuracy material simulations.