Abstract
Once deployed, machine learning models face new data whose characteristics may differ from those observed during training – a phenomenon known as concept drift. Since such drift can degrade performance, it is necessary to detect it and, if required, adapt the model accordingly. While a variety of drift detection and adaptation methods exist for standard vectorial data, a suitable treatment of text data is far less researched. In this work we present a novel approach that detects and explains drift in text data based on their representation via transformer embeddings.
In a nutshell, the method derives suitable statistical features from the original distribution and its possibly shifted counterpart. Based on these representations, drift scores can be assigned to individual data points, enabling a visualization and a human-readable characterization of the type of drift.
We demonstrate the approach’s effectiveness in reliably detecting drift in several experiments.
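The abstract describes the pipeline only at a high level. The following is a minimal illustrative sketch of how per-document drift scores could be computed from transformer embeddings; it assumes a sentence-transformers encoder ("all-MiniLM-L6-v2") and an RBF-kernel MMD witness function as the scoring statistic. Both choices are assumptions made for illustration, not the authors' exact method.

```python
# Minimal illustrative sketch (NOT the paper's exact method): per-document drift
# scores from transformer embeddings via an RBF-kernel MMD witness function.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend


def rbf_kernel(a, b, gamma):
    """Pairwise RBF kernel between the rows of a (m, d) and b (n, d)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)


def drift_scores(reference_texts, new_texts, model_name="all-MiniLM-L6-v2"):
    """Assign a drift score to every document in new_texts.

    Higher scores indicate documents lying in regions where the new
    distribution deviates most strongly from the reference distribution.
    """
    model = SentenceTransformer(model_name)
    ref = model.encode(reference_texts)  # (m, d) reference embeddings
    new = model.encode(new_texts)        # (n, d) possibly drifted embeddings

    # Median heuristic for the kernel bandwidth.
    both = np.vstack([ref, new])
    sq_dists = ((both[:, None, :] - both[None, :, :]) ** 2).sum(axis=-1)
    gamma = 1.0 / np.median(sq_dists[sq_dists > 0])

    # MMD witness evaluated at the new documents: mean similarity to the
    # reference sample minus mean similarity to the new sample.
    witness = (rbf_kernel(new, ref, gamma).mean(axis=1)
               - rbf_kernel(new, new, gamma).mean(axis=1))
    return -witness  # larger value = more strongly drift-affected


if __name__ == "__main__":
    ref = ["the plot was gripping", "a wonderful performance by the lead actress"]
    cur = ["the plot was gripping", "el guion es excelente"]  # simulated language drift
    print(drift_scores(ref, cur))
```

The witness function is used here because it yields a per-sample quantity that can be visualized and inspected, in the spirit of the drift scores described above; any other per-sample statistic over the embedding space could be substituted.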
Acknowledgments
The authors were supported by SAIL. SAIL is funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia under grant no. NW21-059A.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Feldhans, R., Hammer, B. (2025). Towards Reliable Drift Detection and Explanation in Text Data. In: Julian, V., et al. Intelligent Data Engineering and Automated Learning – IDEAL 2024. IDEAL 2024. Lecture Notes in Computer Science, vol 15346. Springer, Cham. https://doi.org/10.1007/978-3-031-77731-8_28
DOI: https://doi.org/10.1007/978-3-031-77731-8_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-77730-1
Online ISBN: 978-3-031-77731-8