Abstract
Script identification is an essential part of a document image analysis system, since documents written in different scripts may require different processing methods. In this paper, we address script identification in camera-based document images, which is challenging because such images often suffer from perspective distortion, uneven illumination, and other degradations. We propose a novel network called ScriptNet that is composed of two streams: a spatial stream and a visual stream. The spatial stream captures the spatial dependencies within the image, while the visual stream describes the appearance of the image. The two streams are fused within the network, which can be trained in an end-to-end manner. Extensive experiments demonstrate the effectiveness of the proposed approach and show that the two streams are complementary. Our network achieves an accuracy of \(99.1\%\), which compares favourably with state-of-the-art methods. Moreover, it achieves promising results even when trained on non-camera-based document images and tested on camera-based ones.
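To make the two-stream idea concrete, the following is a minimal PyTorch sketch of a late-fusion two-stream classifier; it is an illustration only, not the authors' implementation. The layer sizes, the use of a bidirectional LSTM for the spatial stream, and the name TwoStreamScriptNet are all assumptions.

```python
# Hypothetical sketch of a two-stream network with late fusion (not the authors' code).
import torch
import torch.nn as nn


class TwoStreamScriptNet(nn.Module):
    """Illustrative two-stream classifier: a convolutional "visual" stream for
    appearance and a recurrent "spatial" stream for spatial dependencies,
    fused before a shared classification head. All layer sizes are assumptions."""

    def __init__(self, num_scripts: int = 4):
        super().__init__()
        # Visual stream: plain CNN feature extractor (appearance).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                  # -> (batch, 64)
        )
        # Spatial stream: treat image rows as a sequence and model their
        # dependencies with a bidirectional LSTM (one plausible choice).
        self.row_proj = nn.Linear(3 * 64, 64)              # assumes 64-pixel-wide input
        self.spatial = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        # Fusion: concatenate the two stream embeddings, then classify.
        self.head = nn.Linear(64 + 128, num_scripts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        v = self.visual(x)                                  # (b, 64)
        rows = x.permute(0, 2, 1, 3).reshape(b, h, c * w)   # one feature vector per image row
        s, _ = self.spatial(self.row_proj(rows))            # (b, h, 128)
        s = s.mean(dim=1)                                   # pool over rows
        return self.head(torch.cat([v, s], dim=1))          # fused logits


if __name__ == "__main__":
    model = TwoStreamScriptNet(num_scripts=4)
    logits = model(torch.randn(2, 3, 64, 64))               # dummy 64x64 crops
    print(logits.shape)                                     # torch.Size([2, 4])
```

Because both streams feed a single classification head, the whole model can be trained end-to-end with an ordinary cross-entropy loss, which is the training setup the abstract describes.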
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant 61603256 and by the Natural Sciences and Engineering Research Council of Canada.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Deng, M., Ma, H., Liu, L., Qiu, T., Lu, Y., Suen, C.Y. (2023). ScriptNet: A Two Stream CNN for Script Identification in Camera-Based Document Images. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1793. Springer, Singapore. https://doi.org/10.1007/978-981-99-1645-0_2
DOI: https://doi.org/10.1007/978-981-99-1645-0_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-1644-3
Online ISBN: 978-981-99-1645-0
eBook Packages: Computer Science, Computer Science (R0)