13 Nov 2019 | Rex Ying†, Dylan Bourgeois†,‡, Jiaxuan You†, Marinka Zitnik†, Jure Leskovec†
GNNEXPLAINER is a model-agnostic method for generating interpretable explanations for the predictions of any Graph Neural Network (GNN) on any graph-based machine learning task, including node classification, link prediction, and graph classification. Given a prediction, it identifies a compact subgraph and a small subset of node features that are most influential for that prediction. The method formulates explanation generation as an optimization problem: it maximizes the mutual information between the GNN's prediction and the distribution of possible subgraph structures, learning a continuous mask over the graph structure and a mask over node features that together select the important subgraph and feature subset. GNNEXPLAINER produces consistent and concise explanations for both single-instance and multi-instance predictions. Experiments on synthetic and real-world graphs show that GNNEXPLAINER outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. By exposing the structures and features that drive a prediction, it helps practitioners understand and debug GNN models and provides a general framework for interpreting GNN-based systems.
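As a sketch of the optimization the abstract describes (notation assumed from the paper: Y is the GNN's predicted label distribution, G_S the explanatory subgraph, and X_S its associated node features), the mutual-information objective can be written as

\[
\max_{G_S} \; MI\bigl(Y, (G_S, X_S)\bigr) \;=\; H(Y) \;-\; H\bigl(Y \mid G = G_S,\, X = X_S\bigr)
\]

Since H(Y) is fixed once the GNN is trained, maximizing the mutual information is equivalent to minimizing the conditional entropy H(Y | G = G_S, X = X_S), i.e. finding the compact subgraph and feature subset under which the model's prediction is most certain.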
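A minimal PyTorch sketch of the mask-learning idea follows. All names here are hypothetical, not the paper's reference implementation: `gnn` stands in for any pretrained GNN that maps a dense adjacency matrix and node-feature matrix to per-node logits, and the sparsity weights are illustrative.

```python
import torch

def explain(gnn, adj, feats, node_idx, epochs=200, lr=0.01):
    """Learn soft masks over edges and node features that preserve the
    GNN's prediction for one node (illustrative sketch only)."""
    gnn.eval()
    with torch.no_grad():
        target = gnn(adj, feats)[node_idx].argmax()  # prediction to explain

    # Continuous masks, squashed to (0, 1) via sigmoid in the forward pass.
    edge_mask = torch.randn_like(adj, requires_grad=True)
    feat_mask = torch.randn(feats.size(1), requires_grad=True)
    opt = torch.optim.Adam([edge_mask, feat_mask], lr=lr)

    for _ in range(epochs):
        opt.zero_grad()
        masked_adj = adj * torch.sigmoid(edge_mask)      # soft subgraph
        masked_feats = feats * torch.sigmoid(feat_mask)  # soft feature subset
        log_probs = torch.log_softmax(gnn(masked_adj, masked_feats), dim=-1)
        # Maximizing mutual information reduces to minimizing conditional
        # entropy, approximated here by the negative log-likelihood of the
        # original prediction, plus sparsity penalties (weights assumed)
        # that keep the explanation compact.
        loss = (-log_probs[node_idx, target]
                + 0.005 * torch.sigmoid(edge_mask).sum()
                + 0.1 * torch.sigmoid(feat_mask).sum())
        loss.backward()
        opt.step()

    return torch.sigmoid(edge_mask).detach(), torch.sigmoid(feat_mask).detach()
```

Thresholding the returned masks yields the compact subgraph and feature subset that serve as the explanation.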