
Towards Prototype-Based Self-Explainable Graph Neural Network

Online AM: 30 August 2024

Abstract

Graph Neural Networks (GNNs) have shown great ability in modeling graph-structured data across various domains. However, GNNs are known as black-box models that lack interpretability. Without understanding their inner workings, we cannot fully trust them, which largely limits their adoption in high-stakes scenarios. Though some initial efforts have been made to interpret the predictions of GNNs, they mainly focus on providing post-hoc explanations via an additional explainer, which can misrepresent the true inner working mechanism of the target GNN. Work on self-explainable GNNs remains rather limited. Therefore, we study the novel problem of learning prototype-based self-explainable GNNs that can simultaneously give accurate predictions and prototype-based explanations for those predictions. We design a framework that learns prototype graphs capturing representative patterns of each class as class-level explanations. The learned prototypes are also used to simultaneously make a prediction for a test instance and provide an instance-level explanation. Extensive experiments on real-world and synthetic datasets show the effectiveness of the proposed framework in both prediction accuracy and explanation quality.
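
To make the prototype-based prediction mechanism concrete, below is a minimal, hypothetical sketch (in PyTorch) of a classifier that encodes a graph and scores it by similarity to learnable class prototypes. The PrototypeGNN class, the GCN-style encoder, the cosine-similarity scoring, and all hyperparameters are illustrative assumptions for exposition, not the framework proposed in the paper (which learns prototype graphs rather than plain embedding vectors).

```python
# Minimal, hypothetical sketch of a prototype-based self-explainable graph
# classifier. Assumes a dense adjacency matrix, a mean-pooled graph embedding,
# and learnable prototype vectors (one bank per class); all names and sizes
# are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes, protos_per_class=3):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)   # first propagation layer
        self.lin2 = nn.Linear(hid_dim, hid_dim)  # second propagation layer
        # One bank of learnable prototype embeddings per class.
        self.prototypes = nn.Parameter(
            torch.randn(num_classes, protos_per_class, hid_dim))

    def encode(self, x, adj):
        # GCN-style propagation: average neighbor features, then transform.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin1((adj @ x) / deg))
        h = F.relu(self.lin2((adj @ h) / deg))
        return h.mean(dim=0)  # mean-pool nodes into one graph embedding

    def forward(self, x, adj):
        z = self.encode(x, adj)  # shape: (hid_dim,)
        # Cosine similarity between the graph embedding and every prototype.
        sims = F.cosine_similarity(
            self.prototypes, z.expand_as(self.prototypes), dim=-1)  # (C, K)
        logits = sims.max(dim=-1).values  # best-matching prototype per class
        return logits, sims               # sims can be read as explanations


# Toy usage: one 5-node graph with 8-dimensional node features, 2 classes.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()  # symmetrize the random adjacency
model = PrototypeGNN(in_dim=8, hid_dim=16, num_classes=2)
logits, sims = model(x, adj)
print(logits.shape, sims.shape)  # torch.Size([2]) torch.Size([2, 3])
```

In a design of this kind, the per-prototype similarity scores returned alongside the logits can serve as instance-level explanations, while the prototypes themselves act as class-level explanations, which is the role the abstract ascribes to its learned prototype graphs.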

Published In

ACM Transactions on Knowledge Discovery from Data (Just Accepted)
EISSN: 1556-472X

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Online AM: 30 August 2024
Accepted: 25 July 2024
Revised: 13 January 2024
Received: 09 September 2023

Author Tags

  1. Graph Neural Networks
  2. Self-Explainable
  3. Prototype

Qualifiers

  • Research-article
