
CIRNet: An Improved Lightweight Convolution Neural Network Architecture with Inverted Residuals for Universal Steganalysis

  • Research Article - Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

Steganalysis techniques play a crucial role in forensic investigation, aiding professionals in identifying digital images that carry hidden, potentially unsafe content. This work proposes CIRNet, a novel lightweight deep-learning convolutional neural network architecture that provides universal steganalysis capabilities. CIRNet leverages inverted residual blocks integrated with a self-attention mechanism, which effectively minimises both the detection error rate and the computational complexity of steganalysis. The inverted residual blocks combine lightweight depth-wise and point-wise convolutions with a self-attention module. This integration strengthens the saliency of the feature maps corresponding to the embedding regions while reducing the network's parameter count and total floating-point operations. Consequently, CIRNet achieves superior accuracy, making it well suited for deployment on resource-constrained smart mobile devices. Experimental results validate the superiority of the proposed CIRNet over several state-of-the-art deep-learning networks commonly employed in steganalysis. The results further demonstrate enhanced generalisation ability, particularly on diverse datasets generated from BOSSbase 1.01 and BOWS2 using the HUGO, WOW, and S-UNIWARD steganography methods at two distinct payloads. These findings establish CIRNet as an efficient, high-performing architecture for lightweight steganalysis.
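To make the architectural idea concrete, the following is a minimal sketch of one inverted residual block of the kind the abstract describes: a point-wise expansion, a depth-wise 3×3 convolution, a channel-attention step, and a point-wise linear projection with a skip connection. It is written with TensorFlow/Keras (note 3 below suggests the authors worked with TensorFlow), and the expansion factor, the squeeze-and-excitation-style attention standing in for the self-attention module, and the 256×256 grayscale input size are illustrative assumptions, not the authors' exact CIRNet configuration.

# A minimal sketch (not the authors' published CIRNet code) of an inverted
# residual block with depth-wise / point-wise convolutions and a channel
# attention module, in TensorFlow/Keras. Widths, expansion factor, and the
# SE-style attention are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers


def channel_attention(x, reduction=4):
    """Squeeze-and-excitation-style attention that reweights channels."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                 # squeeze to B x C
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # excitation weights
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                       # rescale feature maps


def inverted_residual_block(x, out_channels, expansion=6, stride=1):
    """Expand -> depth-wise conv -> attention -> project, with a skip path."""
    in_channels = x.shape[-1]
    shortcut = x

    # Point-wise expansion.
    y = layers.Conv2D(in_channels * expansion, 1, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)

    # Depth-wise 3x3 convolution: cheap spatial filtering.
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)

    # Attention over the expanded features (strengthens embedding-region maps).
    y = channel_attention(y)

    # Point-wise linear projection back to out_channels.
    y = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)

    # Residual connection only when shapes match.
    if stride == 1 and in_channels == out_channels:
        y = layers.Add()([y, shortcut])
    return y


if __name__ == "__main__":
    # 256x256 grayscale cover/stego input assumed for illustration.
    inputs = tf.keras.Input(shape=(256, 256, 1))
    x = layers.Conv2D(16, 3, padding="same", use_bias=False)(inputs)
    x = inverted_residual_block(x, 16)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(2, activation="softmax")(x)  # cover vs. stego
    tf.keras.Model(inputs, outputs).summary()

Because the depth-wise and point-wise convolutions factorise a standard convolution, a block of this form keeps the parameter count and floating-point operations low, which is the property the abstract highlights for deployment on mobile devices.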


Notes

  1. http://dde.binghamton.edu/download/

  2. BOWS2—Mendeley Data

  3. https://github.com/tensorflow/tensorflow/issues/32809


Acknowledgements

We acknowledge meaningful discussions with Prof. R.K. Aggarwal, J.N.U., Delhi.

Funding

No funding was received to assist with the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Ankita Gupta.

Ethics declarations

Conflict of Interest

The authors have no competing interests to declare that are relevant to the content of this article.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gupta, A., Chhikara, R. & Sharma, P. CIRNet: An Improved Lightweight Convolution Neural Network Architecture with Inverted Residuals for Universal Steganalysis. Arab J Sci Eng 49, 12219–12233 (2024). https://doi.org/10.1007/s13369-023-08630-x


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13369-023-08630-x

Keywords