Explainable artificial intelligence models for mineral prospectivity mapping

September 2024 | Renguang ZUO, Qiuming CHENG, Ying XU, Fanfan YANG, Yihui XIONG, Ziyi WANG & Oliver P. KREUZER
This article discusses the development of explainable artificial intelligence (XAI) models for mineral prospectivity mapping (MPM), a tool used to identify the areas most likely to contain undiscovered mineral deposits. While AI models have shown promise in MPM, they often lack interpretability, generalizability, and physical consistency. To address these issues, the authors propose a novel workflow that integrates domain knowledge throughout the AI-driven MPM process, from input data through model design to output, with the aim of creating more transparent and explainable AI models for MPM.

The study addresses three aspects of interpretability: input data, models, and results. For input data, the authors use feature engineering to preprocess and annotate geological prospecting big data, compiled on the basis of conceptual mineral deposit and targeting models. They also use data augmentation and negative sample selection to generate sufficiently interpretable training samples.

For model interpretability, key ore-controlling factors are added to a traditional neural network structure to build hidden layers that embed the formation processes of mineral deposits as hard constraints. In addition, the spatial coupling relationship between known mineral deposits and key ore-controlling factors is added to the loss function of the XAI models as a soft constraint.

For the interpretability of predictive results, the authors use visualization techniques to inspect the output of each hidden layer, clarifying how prospecting information is extracted and fused. They also use attribution techniques to determine the importance of input variables and to understand their contribution to the formation of mineral deposits. The study concludes that XAI models produce more interpretable results, thereby enhancing the reliability of the delineated targets.
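The soft-constraint idea, in which high prospectivity scores are discouraged where key ore-controlling evidence is weak, can be sketched as a modified loss function. The penalty term and the `factor_map` input below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def constrained_loss(y_true, y_pred, factor_map, weight=0.1):
    """Binary cross-entropy plus a hypothetical soft-constraint penalty.

    factor_map is an assumed per-cell favourability score in [0, 1] derived
    from key ore-controlling factors; the penalty is a stand-in for the
    paper's spatial-coupling term, not its published formulation.
    """
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # Standard data-fit term: binary cross-entropy against known deposits.
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Soft constraint: penalise high scores where ore-controlling evidence is weak.
    penalty = np.mean(y_pred * (1.0 - factor_map))
    return bce + weight * penalty
```

With `weight=0`, the function reduces to an ordinary cross-entropy loss; raising the weight trades data fit against geological plausibility.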
The underlying knowledge gained from the XAI model enables feedback and helps better understand complex mineral systems. The development of XAI models for MPM is a promising area for future research aimed at improving MPM.
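Attribution of predictive results can be illustrated with a generic permutation-importance sketch: shuffling one evidence layer at a time and measuring the resulting error increase yields a model-agnostic ranking of input variables. This is a stand-in technique for illustration, not necessarily the attribution method the authors used:

```python
import numpy as np

def permutation_importance(model_fn, X, y, rng=None):
    """Rank input evidence layers by permutation importance.

    model_fn: any callable mapping an (n_samples, n_features) array to
    predicted prospectivity scores. A larger error increase after
    shuffling a feature indicates a larger contribution to the model.
    """
    rng = np.random.default_rng(rng)
    base_error = np.mean((model_fn(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # destroy the j-th feature's spatial signal
        perm_error = np.mean((model_fn(X_perm) - y) ** 2)
        importances.append(perm_error - base_error)  # error increase = importance
    return np.array(importances)
```

Features the model ignores score near zero, so the ranking directly supports the kind of contribution analysis the summary describes.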