DOI: 10.1145/3581783.3611766
Research article, ACM Conference Proceedings (MM)

Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region

Published: 27 October 2023

Abstract

Stroke-based rendering aims to recreate an image with a set of strokes. Most existing methods render complex images with a uniform-block-dividing strategy, which leads to boundary inconsistency artifacts. To solve this problem, we propose Compositional Neural Painter, a novel stroke-based rendering framework that dynamically predicts the next painting region based on the current canvas, instead of dividing the image plane uniformly into painting regions. We start from an empty canvas and divide the painting process into several steps. At each step, a compositor network trained with a phasic RL strategy first predicts the next painting region, then a painter network trained with a WGAN discriminator predicts stroke parameters, and a stroke renderer paints the strokes onto the painting region of the current canvas. Moreover, we extend our method to stroke-based style transfer with a novel differentiable distance transform loss, which helps preserve the structure of the input image during stroke-based stylization. Extensive experiments show that our model outperforms existing models in both stroke-based neural painting and stroke-based stylization.
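The painting loop the abstract describes (predict a region, predict strokes, render them, repeat) can be sketched in plain NumPy. Everything here is an illustrative stand-in, not the paper's method: `pick_region` replaces the RL-trained compositor network with a simple largest-error heuristic, and `paint_region` replaces the painter network and stroke renderer with a mean-colour blend.

```python
import numpy as np

def pick_region(target, canvas, grid=4):
    """Stand-in for the compositor network: choose the grid cell with the
    largest current reconstruction error as the next painting region
    (the paper instead trains this prediction with a phasic RL strategy)."""
    h, w = target.shape[:2]
    best_err, best_box = -1.0, None
    for i in range(grid):
        for j in range(grid):
            y0, y1 = i * h // grid, (i + 1) * h // grid
            x0, x1 = j * w // grid, (j + 1) * w // grid
            err = np.abs(target[y0:y1, x0:x1] - canvas[y0:y1, x0:x1]).sum()
            if err > best_err:
                best_err, best_box = err, (y0, y1, x0, x1)
    return best_box

def paint_region(target, canvas, box):
    """Stand-in for the painter network + stroke renderer: blend the
    region's mean colour onto the canvas (the real painter predicts
    stroke parameters that a stroke renderer rasterises)."""
    y0, y1, x0, x1 = box
    mean_colour = target[y0:y1, x0:x1].mean(axis=(0, 1))
    canvas[y0:y1, x0:x1] = 0.5 * canvas[y0:y1, x0:x1] + 0.5 * mean_colour
    return canvas

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))   # image to recreate
canvas = np.zeros_like(target)     # start from an empty canvas
err_start = np.abs(target - canvas).mean()

for step in range(32):             # painting divided into several steps
    box = pick_region(target, canvas)           # 1) predict next painting region
    canvas = paint_region(target, canvas, box)  # 2)-3) predict + render strokes

err_end = np.abs(target - canvas).mean()
print(f"reconstruction error: {err_start:.3f} -> {err_end:.3f}")
```

Even with these crude stand-ins, the loop illustrates the framework's key property: because each painting region is chosen from the current canvas state rather than from a fixed uniform grid, regions can land wherever the residual error is largest, which is what lets the learned version avoid block-boundary artifacts.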


    Published In

    MM '23: Proceedings of the 31st ACM International Conference on Multimedia
    October 2023
    9913 pages
    ISBN:9798400701085
    DOI:10.1145/3581783

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

1. distance transform
2. phasic RL strategy
3. stroke-based rendering

    Qualifiers

    • Research-article

    Funding Sources

    • Young Elite Scientists Sponsorship Program by CAST
    • The Fundamental Research Funds for the Central Universities
    • Shanghai Sailing Program
    • Shanghai Municipal Science and Technology Major Project
    • CCF-Tencent Open Research Fund
    • Beijing Natural Science Foundation
• Shanghai Science and Technology Commission
    • National Natural Science Foundation of China

    Conference

    MM '23
    Sponsor:
    MM '23: The 31st ACM International Conference on Multimedia
    October 29 - November 3, 2023
    Ottawa ON, Canada

    Acceptance Rates

Overall acceptance rate: 2,145 of 8,556 submissions (25%)

    Article Metrics

• Downloads (last 12 months): 146
• Downloads (last 6 weeks): 14
    Reflects downloads up to 06 Feb 2025

Cited By
• (2024) Curved-stroke-based neural painting and stylization through thin plate spline interpolation. SCIENTIA SINICA Informationis 54(2), 301. DOI: 10.1360/SSI-2023-0194. Online publication date: 5-Feb-2024.
• (2024) MambaPainter: Neural Stroke-Based Rendering in a Single Step. SIGGRAPH Asia 2024 Posters, 1--2. DOI: 10.1145/3681756.3697906. Online publication date: 3-Dec-2024.
• (2024) ProcessPainter: Learning to draw from sequence data. SIGGRAPH Asia 2024 Conference Papers, 1--10. DOI: 10.1145/3680528.3687596. Online publication date: 3-Dec-2024.
• (2024) Inverse Painting: Reconstructing The Painting Process. SIGGRAPH Asia 2024 Conference Papers, 1--11. DOI: 10.1145/3680528.3687574. Online publication date: 3-Dec-2024.
• (2024) Towards Artist-Like Painting Agents with Multi-Granularity Semantic Alignment. Proceedings of the 32nd ACM International Conference on Multimedia, 10191--10199. DOI: 10.1145/3664647.3681245. Online publication date: 28-Oct-2024.
• (2024) Learning Realistic Sketching: A Dual-agent Reinforcement Learning Approach. Proceedings of the 32nd ACM International Conference on Multimedia, 5921--5929. DOI: 10.1145/3664647.3680759. Online publication date: 28-Oct-2024.
• (2024) SAMVG: A Multi-Stage Image Vectorization Model with the Segment-Anything Model. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4350--4354. DOI: 10.1109/ICASSP48485.2024.10447396. Online publication date: 14-Apr-2024.
• (2024) SuperSVG: Superpixel-Based Scalable Vector Graphics Synthesis. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 24892--24901. DOI: 10.1109/CVPR52733.2024.02351. Online publication date: 16-Jun-2024.
• (2024) Automatic Stylized Action Generation in Animation Using Deep Learning. IEEE Access 12, 188773--188786. DOI: 10.1109/ACCESS.2024.3486024. Online publication date: 2024.
