Higher-order explanations of graph neural networks via relevant walks

T Schnake, O Eberle, J Lederer… - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 - ieeexplore.ieee.org
Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable, and GNNs have to a large extent remained black boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks in the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
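The following is a minimal, runnable sketch of the walk-based decomposition that motivates GNN-LRP, restricted to the special case of a linear GNN with a sum readout. The toy graph, weight shapes, and variable names are illustrative assumptions, not taken from the paper. In the linear case the prediction decomposes exactly into one relevance score per walk through the graph; the full GNN-LRP method generalizes this to nonlinear GNNs by applying LRP propagation rules layer by layer.

# Sketch: exact walk decomposition of a *linear* GNN (the case that
# motivates GNN-LRP). All sizes and values below are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n, d, T = 4, 3, 2                       # nodes, feature dim, message-passing layers
A = np.array([[0, 1, 0, 1],             # adjacency of a small toy graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n, d))             # node features
Ws = [rng.normal(size=(d, d)) for _ in range(T)]  # per-layer weight matrices
c = rng.normal(size=d)                  # linear readout vector

# Forward pass of the linear GNN: H_t = A H_{t-1} W_t, f = sum_v H_T[v] . c
H = X
for W in Ws:
    H = A @ H @ W
f = float(H.sum(axis=0) @ c)

# Higher-order decomposition: one relevance score per walk (v_0, ..., v_T).
# For this linear model,
#   R_walk = A[v_T, v_{T-1}] * ... * A[v_1, v_0] * (x_{v_0} . W_1 ... W_T c)
W_prod = np.linalg.multi_dot(Ws) @ c if T > 1 else Ws[0] @ c
relevances = {}
for walk in itertools.product(range(n), repeat=T + 1):
    r = X[walk[0]] @ W_prod
    for t in range(T):
        r *= A[walk[t + 1], walk[t]]    # zero if the walk leaves the graph's edges
    relevances[walk] = r

# Conservation: the walk relevances sum exactly to the prediction.
assert np.isclose(sum(relevances.values()), f)
print("top walks:", sorted(relevances.items(), key=lambda kv: -abs(kv[1]))[:3])

The conservation check illustrates the key property of the decomposition: the relevance scores of all walks sum to the model output, so each walk's score can be read as its joint edge-group contribution to the prediction.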