Abstract
Attention mechanisms are often used to support the interpretation of neural text classifiers by highlighting the words to which the network attended while making a prediction. However, recent studies have shown that attention does not always provide a faithful explanation of the model's decisions. Therefore, in this paper we study an alternative approach: prototype-based neural networks. Although they achieve promising results on texts, existing prototype-based models may provide explanations in the form of comparisons to whole (potentially long) documents, or may themselves fail to provide reliable explanations. To overcome these issues, this work introduces a new prototype-based convolutional neural architecture for text classification that explains its predictions in the form of similarities to phrases from the training set. The experimental evaluation demonstrates that the proposed network achieves classification performance similar to that of black-box convolutional networks while providing faithful explanations. Moreover, it is shown that the new method for dynamically tuning the number of prototypes introduced in this paper offers performance gains over static tuning.
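The paper's exact architecture is not reproduced on this page, but the following minimal PyTorch sketch illustrates the general idea the abstract describes: convolutional filters embed every phrase (word window) of the input, and the classifier scores a document by its similarity to learned prototype vectors, each of which can later be identified with its nearest training phrase. All class names and hyperparameters here are assumptions, and the log-ratio similarity is borrowed from ProtoPNet [3] rather than taken from this paper.

import torch
import torch.nn as nn

class PrototypePhraseClassifier(nn.Module):
    """Toy prototype-based text classifier (illustrative only)."""

    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64,
                 phrase_len=3, n_prototypes=20, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # A 1D convolution turns every window of `phrase_len` consecutive
        # words into a single phrase vector.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=phrase_len)
        # Learned prototype vectors live in the phrase-vector space; after
        # training, each can be identified with its closest training phrase.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, n_filters))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)   # (B, emb_dim, T)
        phrases = torch.relu(self.conv(x))              # (B, n_filters, T')
        # Squared Euclidean distance of every phrase to every prototype.
        diffs = phrases.unsqueeze(1) - self.prototypes[None, :, :, None]
        dists = (diffs ** 2).sum(dim=2)                 # (B, n_prototypes, T')
        # The document's similarity to a prototype is determined by its
        # best-matching phrase (ProtoPNet-style log-ratio activation [3]).
        min_d = dists.min(dim=2).values                 # (B, n_prototypes)
        sims = torch.log((min_d + 1.0) / (min_d + 1e-4))
        return self.classifier(sims)

model = PrototypePhraseClassifier()
logits = model(torch.randint(0, 10000, (8, 50)))  # 8 documents, 50 tokens each

An explanation for a prediction can then be read off directly: each prototype, displayed as its nearest training phrase, contributes to the decision in proportion to its similarity to the best-matching phrase of the input.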
Notes
- 1.
The purpose of the preprocessing was 1) to binarize the datasets in which sentiment was expressed on a scale of 1–5, 2) to balance the size of the datasets, and 3) to balance the number of examples from the positive and negative classes through under-sampling; a minimal code sketch of these steps is given below. For more details, please refer to [6].
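For illustration only, a minimal pandas sketch of steps 1) and 3) above; the column names, the dropping of the neutral score 3, and the omission of the cross-dataset size balancing of step 2) are all assumptions, and the exact procedure is described in [6].

import pandas as pd

def preprocess(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # 1) Binarize: assume scores 1-2 are negative (0), 4-5 positive (1),
    #    and that the neutral score 3 is dropped.
    df = df[df["score"] != 3].copy()
    df["label"] = (df["score"] >= 4).astype(int)
    # 3) Balance the classes by under-sampling the majority class.
    n = df["label"].value_counts().min()
    balanced = (df.groupby("label", group_keys=False)
                  .apply(lambda g: g.sample(n=n, random_state=seed)))
    return balanced.sample(frac=1, random_state=seed)  # shuffle rows

# Example usage on a toy frame:
toy = pd.DataFrame({"text": list("abcdefgh"),
                    "score": [1, 2, 4, 5, 5, 4, 2, 5]})
print(preprocess(toy)[["text", "label"]])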
References
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR (2015)
Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D., Rinzivillo, S.: Benchmarking and survey of explanation methods for black box models. arXiv preprint arXiv:2102.13076 (2021)
Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., Su, J.K.: This looks like that: deep learning for interpretable image recognition. In: Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 32, pp. 8928–8939 (2019)
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018)
He, R., Lee, W.S., Ng, H.T., Dahlmeier, D.: Effective attention modeling for aspect-level sentiment classification. In: Proceedings of the 27th International Conference on Computational Linguistics, pp. 1121–1131 (2018)
Hong, D., Baek, S., Wang, T.: Interpretable sequence classification via prototype trajectory (2020). https://arxiv.org/abs/2007.01777
Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y., Denuyl, S.: Social biases in NLP models as barriers for persons with disabilities. In: Proceedings of the 58th ACL, pp. 5491–5501 (2020)
Jain, S., Wallace, B.C.: Attention is not Explanation. In: Proceedings of the NAACL, pp. 3543–3556 (2019)
Lampridis, O., Guidotti, R., Ruggieri, S.: Explaining sentiment classification with synthetic exemplars and counter-exemplars. In: Appice, A., Tsoumakas, G., Manolopoulos, Y., Matwin, S. (eds.) DS 2020. LNCS (LNAI), vol. 12323, pp. 357–373. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61527-7_24
Letarte, G., Paradis, F., Giguère, P., Laviolette, F.: Importance of self-attention for sentiment analysis. In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 267–275 (2018)
Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: AAAI (2018)
Ming, Y., Xu, P., Qu, H., Ren, L.: Interpretable and steerable sequence learning via prototypes. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2019)
Molnar, C.: Interpretable Machine Learning (2019). https://christophm.github.io/interpretable-ml-book/
Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: Proceedings of the EMNLP, pp. 1532–1543 (2014)
Ribeiro, M.T., Singh, S., Guestrin, C.: Model-agnostic interpretability of machine learning. In: Workshop on Human Interpretability in Machine Learning at International Conference on Machine Learning (2016)
Samek, W., Müller, K.-R.: Towards explainable artificial intelligence. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 5–22. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_1
Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In: 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019 (2019)
Strubell, E., Verga, P., Belanger, D., McCallum, A.: Fast and accurate entity recognition with iterated dilated convolutions. In: Proceedings of EMNLP, pp. 2670–2680 (2017)
Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328. PMLR (2017)
Wang, Y., Huang, M., Zhu, X., Zhao, L.: Attention-based LSTM for aspect-level sentiment classification. In: Proceedings of the EMNLP, pp. 606–615 (2016)
Wiegreffe, S., Pinter, Y.: Attention is not not Explanation. In: Proceedings of the EMNLP-IJCNLP, pp. 11–20 (2019)
Acknowledgments
The authors are grateful to the Poznan Supercomputing and Networking Center for computational resources. The research by Kamil Pluciński and Jerzy Stefanowski was supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA no. 952215. Mateusz Lango was supported by the Polish National Science Centre grant no. 2016/22/E/ST6/00299.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Pluciński, K., Lango, M., Stefanowski, J. (2021). Prototypical Convolutional Neural Network for a Phrase-Based Explanation of Sentiment Classification. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_35
DOI: https://doi.org/10.1007/978-3-030-93736-2_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93735-5
Online ISBN: 978-3-030-93736-2