Suitable and Style-Consistent Multi-Texture Recommendation for Cartoon Illustrations

Published: 16 May 2024
Abstract

    Texture plays an important role in cartoon illustrations: it conveys object materials and enriches the visual experience. Unfortunately, manually designing and drawing an appropriate texture is difficult even for proficient artists, let alone novices and amateurs. Although countless textures exist on the Internet, finding an appropriate one with a traditional text-based search engine is hard. Several texture pickers have been proposed, but they still require users to browse the textures themselves, which remains labor-intensive and time-consuming. In this article, an automatic texture recommendation system is proposed that recommends multiple textures to replace a set of user-specified regions in a cartoon illustration with a visually pleasing result. Two measurements, a suitability measurement and a style-consistency measurement, ensure that the recommended textures are suitable for cartoon illustration and, at the same time, mutually consistent in style. Suitability is measured from the synthesizability, cartoonity, and region fitness of a texture. Style-consistency is predicted with a learning-based solution, since judging whether two textures are consistent in style is subjective. The recommendation is formulated as an optimization problem and solved via a genetic algorithm. The method is validated on various cartoon illustrations, yielding convincing results.
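    The pipeline the abstract describes — a per-region suitability score plus a pairwise style-consistency score, combined and optimized with a genetic algorithm — can be sketched as follows. This is an illustrative toy under stated assumptions, not the paper's implementation: `suitability` and `style_consistency` here are deterministic stand-ins for the paper's suitability measurement (synthesizability, cartoonity, region fitness) and its learned style-consistency predictor.

    ```python
    import random

    def suitability(texture_id, region_id):
        # Stand-in for the paper's suitability measurement
        # (synthesizability + cartoonity + region fitness), in [0, 1].
        return ((texture_id * 31 + region_id * 17) % 97) / 96.0

    def style_consistency(tex_a, tex_b):
        # Stand-in for the learned pairwise style-consistency predictor.
        a, b = min(tex_a, tex_b), max(tex_a, tex_b)
        return ((a * 13 + b * 7) % 89) / 88.0

    def fitness(assignment):
        # assignment[r] = texture index chosen for region r.
        suit = sum(suitability(t, r) for r, t in enumerate(assignment))
        pairs = [(a, b) for i, a in enumerate(assignment)
                 for b in assignment[i + 1:]]
        cons = sum(style_consistency(a, b) for a, b in pairs)
        return suit / len(assignment) + (cons / len(pairs) if pairs else 0.0)

    def genetic_search(n_regions, n_textures, pop_size=30,
                       generations=50, seed=0):
        rng = random.Random(seed)
        # Each chromosome assigns one candidate texture to each region.
        pop = [[rng.randrange(n_textures) for _ in range(n_regions)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]      # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = rng.sample(survivors, 2)
                cut = rng.randrange(1, n_regions) if n_regions > 1 else 0
                child = p1[:cut] + p2[cut:]       # one-point crossover
                if rng.random() < 0.2:            # mutation
                    child[rng.randrange(n_regions)] = rng.randrange(n_textures)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = genetic_search(n_regions=4, n_textures=20)
    print(best, round(fitness(best), 3))
    ```

    The GA structure (population of region-to-texture assignments, selection by the combined score, crossover and mutation) mirrors the formulation in the abstract; swapping in real measurement functions would be the substantive work.
    
    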




      Published In

      ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 20, Issue 7
      July 2024
      973 pages
      ISSN:1551-6857
      EISSN:1551-6865
      DOI:10.1145/3613662
      Editor: Abdulmotaleb El Saddik

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 16 May 2024
      Online AM: 12 March 2024
      Accepted: 09 March 2024
      Revised: 05 January 2024
      Received: 14 June 2023
      Published in TOMM Volume 20, Issue 7


      Author Tags

      1. Texture recommendation
      2. texture replacement
      3. cartoon texture
      4. texture style-consistency
      5. texture synthesis

      Qualifiers

      • Research-article

      Funding Sources

      • National Natural Science Foundation of China
      • National Science and Technology Council
