
GCAM: lightweight image inpainting via group convolution and attention mechanism

  • Original Article
  • Published:
International Journal of Machine Learning and Cybernetics

Abstract

Recently, image inpainting techniques have been more concerned with enhancing restoration quality than with running on platforms with limited processing power. In this paper, we propose a lightweight method that combines group convolution and an attention mechanism to improve or replace the traditional convolution module. Group convolution is used to achieve multi-level image inpainting, and we propose a rotating attention mechanism for channel allocation to address the issue of information flow between channels in traditional convolution processing. A parallel discriminator structure is adopted in the overall network design to guarantee both the local and global consistency of the inpainted image. Experimental results demonstrate that, while the quality of image inpainting is maintained, the proposed network's inference time and resource usage are significantly lower than those of comparable lightweight approaches.
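The preview does not include the implementation details of the GCAM module, but the idea sketched in the abstract (group convolution for efficiency, plus an attention step to restore cross-channel information flow) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the class name, the SE-style attention gate, and the reading of "rotation" as a ShuffleNet-style channel shuffle are not taken from the paper.

```python
import torch
import torch.nn as nn


class GroupConvAttentionBlock(nn.Module):
    """Hypothetical sketch: group convolution, a channel 'rotation' (shuffle),
    and a squeeze-and-excitation-style attention gate so that information can
    still move between channel groups."""

    def __init__(self, channels: int, groups: int = 4, reduction: int = 8):
        super().__init__()
        self.groups = groups
        # Group convolution: each group of channels is convolved independently,
        # which cuts parameters and FLOPs compared with a dense convolution.
        self.group_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=1, groups=groups)
        # Channel attention (squeeze-and-excitation style) re-weights channels
        # globally, partially restoring cross-group information flow.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def rotate_channels(self, x: torch.Tensor) -> torch.Tensor:
        # "Rotation" interpreted here as a channel shuffle across groups
        # (as in ShuffleNet); the paper's rotating attention may differ.
        n, c, h, w = x.shape
        x = x.view(n, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.group_conv(x))
        y = self.rotate_channels(y)
        return y * self.attn(y)


# Usage: a 64-channel feature map keeps its shape through the block.
block = GroupConvAttentionBlock(channels=64, groups=4)
out = block(torch.randn(1, 64, 128, 128))  # -> torch.Size([1, 64, 128, 128])
```

The sketch only shows how grouping, shuffling, and channel attention can be composed into a drop-in replacement for a dense convolution; the paper's actual block structure, losses, and parallel discriminators are described in the full text.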

Data availability statement

The authors would like to thank the CelebA-HQ, Places2, and Paris StreetView datasets, which allowed us to train and evaluate the proposed model.

Acknowledgements

This work is supported by the Scientific Research Fund of Hunan Provincial Education Department under Grant 22A0701, the China University Innovation Funding (Beslin Smart Education Project) under Grant 2022BL055, and the Aid Program for Science and Technology Innovative Research Team in Higher Educational Institutions of Hunan Province.

Author information

Corresponding author

Correspondence to Yuantao Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chen, Y., Xia, R., Yang, K. et al. GCAM: lightweight image inpainting via group convolution and attention mechanism. Int. J. Mach. Learn. & Cyber. 15, 1815–1825 (2024). https://doi.org/10.1007/s13042-023-01999-z

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13042-023-01999-z

Keywords