DeepH-2: Enhancing deep-learning electronic structure via an equivariant local-coordinate transformer


30 Jan 2024 | Yuxiang Wang,1,* He Li,1,2,* Zechen Tang,1,* Honggeng Tao,1 Yanzhen Wang,1 Zilong Yuan,1 Zezhou Chen,1 Wenhui Duan,1,2,3,4,† and Yong Xu1,3,4,†
DeepH-2 is a deep learning framework designed to enhance the accuracy and efficiency of density functional theory (DFT) Hamiltonian predictions. The framework combines local-coordinate transformations with equivariant neural networks, overcoming limitations of previous models such as DeepH and DeepH-E3. DeepH-2 uses an equivariant local-coordinate transformer (ELCT) that reduces the computational complexity from \(O(L^6)\) to \(O(L^3)\), allowing higher-angular-momentum features to be included and thereby improving both performance and efficiency. In experiments on monolayer and bilayer graphene and MoS\(_2\), DeepH-2 demonstrates superior accuracy and efficiency compared to its predecessors, achieving sub-meV prediction accuracy while scaling to a roughly 60-fold larger parameter count. Its advanced neural network techniques, including equivariant transformers, enable it to handle more complex datasets and pave the way for high-accuracy electronic-structure studies of large-scale materials.
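The \(O(L^6)\) to \(O(L^3)\) reduction can be made concrete with a rough operation count. The sketch below is an illustration, not DeepH-2's actual code: it assumes the standard argument that a full SO(3) tensor product sums over all allowed \((l_1, l_2, l_3)\) channel triples up to a cutoff \(L\), whereas after rotating features into a bond-local frame (z-axis along the bond), equivariance reduces to rotations about z and only matching-\(m\) components couple. The exact coupling rules in DeepH-2's ELCT may differ.

```python
# Hedged sketch: rough multiply-add counts for spherical-tensor feature
# mixing, illustrating why a local-coordinate frame cuts the cost.
# This is an illustrative model of the complexity argument, not the
# architecture's real implementation.

def so3_tensor_product_ops(L):
    """Dense SO(3) tensor product: every allowed (l1, l2, l3) path,
    costing (2l1+1)(2l2+1)(2l3+1) per path -> O(L^6) overall."""
    ops = 0
    for l1 in range(L + 1):
        for l2 in range(L + 1):
            # Triangle inequality bounds the output angular momentum l3.
            for l3 in range(abs(l1 - l2), min(l1 + l2, L) + 1):
                ops += (2 * l1 + 1) * (2 * l2 + 1) * (2 * l3 + 1)
    return ops

def local_frame_ops(L):
    """After rotating into a bond-local frame, only components with the
    same azimuthal index m mix, so each (l1, l2) pair costs O(L)
    -> O(L^3) overall."""
    ops = 0
    for l1 in range(L + 1):
        for l2 in range(L + 1):
            ops += min(2 * l1 + 1, 2 * l2 + 1)  # shared m components only
    return ops

if __name__ == "__main__":
    for L in (2, 4, 8):
        print(f"L={L}: full SO(3) ~{so3_tensor_product_ops(L)} ops, "
              f"local frame ~{local_frame_ops(L)} ops")
```

Already at \(L = 2\) the dense product needs hundreds of multiply-adds per feature pair while the local-frame version needs a few dozen, and the gap widens rapidly with \(L\), which is what makes higher angular momenta affordable.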