Universal materials model of deep-learning density functional theory Hamiltonian

15 Jun 2024 | Yuxiang Wang,1,* Yang Li,1,* Zechen Tang,1,* He Li,1,2 Zilong Yuan,1 Honggeng Tao,1 Nianlong Zou,1 Ting Bao,1 Xinghao Liang,1 Zezhou Chen,1 Shanghua Xu,1 Ce Bian,1 Zhiming Xu,1 Chong Wang,1 Chen Si,5 Wenhui Duan,1,2,3,4 Yong Xu1,3,4,‡
This paper presents an approach to developing a universal materials model based on the deep-learning density functional theory Hamiltonian (DeepH) framework. The authors construct a large materials database comprising approximately 10,000 solid materials with diverse elemental compositions and structures, and employ DeepH-2, an advanced equivariant-transformer architecture, to train the neural network model. By addressing the "gauge problem" through a gauge-invariant loss function, the model achieves high accuracy in predicting DFT Hamiltonians and derived material properties. The universal DeepH model shows a strong negative correlation between the number of training structures and the mean absolute error (MAE), indicating that larger datasets yield better performance. The authors also demonstrate the feasibility of fine-tuning the universal model on specific materials datasets, such as carbon allotropes, achieving low MAEs and high prediction accuracy. This work lays the foundation for developing large materials models and advancing AI-driven materials discovery.
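As a worked illustration of the reported scaling trend, the negative correlation between training-set size N and Hamiltonian MAE is often summarized as an approximate power law, MAE ≈ a·N^(−α), fitted on a log-log scale. The minimal sketch below shows how such a fit could be performed; the data points and the resulting exponent are purely hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

# Hypothetical (training-set size, Hamiltonian MAE in meV) pairs used only to
# illustrate the fitting procedure; these are NOT results from the paper.
n_structures = np.array([500, 1000, 2000, 4000, 8000])
mae_meV = np.array([5.2, 3.9, 2.8, 2.1, 1.6])

# Fit MAE ~ a * N^(-alpha) by linear regression in log-log space:
#   log(MAE) = log(a) - alpha * log(N)
slope, intercept = np.polyfit(np.log(n_structures), np.log(mae_meV), deg=1)
alpha, prefactor = -slope, np.exp(intercept)

print(f"Fitted scaling: MAE ≈ {prefactor:.2f} * N^(-{alpha:.2f}) meV")
```

A larger fitted exponent α would indicate that the model benefits more rapidly from additional training structures.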