Research article • Open access

ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models

Published: 05 December 2023
  Abstract

    Personalizing generative models offers a way to guide image generation with user-provided references. Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models. However, representing and editing specific visual attributes such as material, style, and layout remains a challenge, leading to a lack of disentanglement and editability. To address this problem, we propose a novel approach that leverages the step-by-step generation process of diffusion models, which generate images from low- to high-frequency information, providing a new perspective on representing, generating, and editing images. We develop the Prompt Spectrum Space P*, an expanded textual conditioning space, and a new image representation method called ProSpect. ProSpect represents an image as a collection of inverted textual token embeddings encoded from per-stage prompts, where each prompt corresponds to a specific generation stage (i.e., a group of consecutive steps) of the diffusion model. Experimental results demonstrate that P* and ProSpect offer better disentanglement and controllability compared to existing methods. We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulations of material, style, and layout, achieving previously unattainable results from a single image input without fine-tuning the diffusion models. Our source code is available at https://github.com/zyxElsa/ProSpect.
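    As a rough illustration of the idea, the sketch below swaps in a different prompt embedding for each generation stage during sampling. It is a hypothetical sketch assuming a diffusers-style UNet and scheduler; prospect_sample and stage_embeddings are illustrative names of our own, not the authors' released code (see the repository linked above for the actual implementation).

        import torch

        def prospect_sample(unet, scheduler, stage_embeddings, latents):
            # stage_embeddings: list of N token-embedding tensors, one per
            # generation stage, ordered from the earliest (low-frequency,
            # layout) steps to the latest (high-frequency, material/detail)
            # steps. Assumes scheduler.timesteps runs from high t to low t.
            timesteps = scheduler.timesteps
            n_stages = len(stage_embeddings)
            steps_per_stage = len(timesteps) / n_stages

            for i, t in enumerate(timesteps):
                # Select the prompt embedding for the stage containing step i.
                stage = min(int(i // steps_per_stage), n_stages - 1)
                cond = stage_embeddings[stage]

                with torch.no_grad():
                    noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
                latents = scheduler.step(noise_pred, t, latents).prev_sample

            return latents

    Partitioning the denoising trajectory this way lets the early-stage prompt govern layout and the late-stage prompt govern material or style, which is the disentanglement the abstract describes.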

    Supplementary Material

    ZIP File (papers_346s4-file4.zip)
    supplemental




      Published In

      ACM Transactions on Graphics, Volume 42, Issue 6
      December 2023, 1565 pages
      ISSN: 0730-0301
      EISSN: 1557-7368
      DOI: 10.1145/3632123
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 05 December 2023
      Published in TOG Volume 42, Issue 6


      Author Tags

      1. attribute-aware editing
      2. diffusion models
      3. image generation
      4. model personalization

      Qualifiers

      • Research-article


      Cited By

      • (2024) Fast Coherent Video Style Transfer via Flow Errors Reduction. Applied Sciences 14, 6 (2630). https://doi.org/10.3390/app14062630. Online publication date: 21-Mar-2024.
      • (2024) GDUI: Guided Diffusion Model for Unlabeled Images. Algorithms 17, 3 (125). https://doi.org/10.3390/a17030125. Online publication date: 18-Mar-2024.
      • (2023) CIRAL at FIRE 2023: Cross-Lingual Information Retrieval for African Languages. In Proceedings of the 15th Annual Meeting of the Forum for Information Retrieval Evaluation, 4-6. https://doi.org/10.1145/3632754.3633076. Online publication date: 15-Dec-2023.
      • (2023) SDE-RAE. Image and Vision Computing 139, C. https://doi.org/10.1016/j.imavis.2023.104836. Online publication date: 1-Nov-2023.
