
Wakey-Wakey: Animate Text by Mimicking Characters in a GIF

Published: 29 October 2023
DOI: https://doi.org/10.1145/3586183.3606813

    Abstract

    With its appealing visual effects, kinetic typography (animated text) has become prevalent in movies, advertisements, and social media. However, crafting its animation scheme remains challenging and time-consuming. We propose an automatic framework that transfers the animation scheme of a rigid body in a given meme GIF to text in vector format. First, the trajectories of key points on the GIF's anchor are extracted and mapped to the text's control points through local affine transformations. The temporal positions of the control points are then optimized to maintain the text topology. We also develop an authoring tool that allows intuitive human control over the generation process. A questionnaire study provides evidence that the output animations are aesthetically pleasing and preserve the animation patterns of the original GIF well, with participants noting that the results convey emotional semantics similar to those of the original GIF. In addition, we evaluate the utility and effectiveness of our approach through a workshop with general users and designers.
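
    To make the middle step of this pipeline concrete, the snippet below is a minimal sketch of the local affine mapping idea only: each text control point is moved by an affine transform fitted to the motion of its nearest GIF key points. It assumes per-frame key points are available as 2D arrays; the function names and the nearest-neighbour fit are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the "local affine" idea from the abstract, NOT the paper's code.
# Assumptions (hypothetical): key points per frame arrive as (K, 2) NumPy arrays, and
# each text control point follows the affine motion of its k nearest reference key points.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine (A, t) such that dst ≈ src @ A.T + t."""
    src_h = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coords, (K, 3)
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (3, 2) solution
    return params[:2].T, params[2]                        # A: (2, 2), t: (2,)

def warp_control_point(p, keypts_ref, keypts_cur, k=4):
    """Move control point p with the affine motion of its k nearest reference key points."""
    nearest = np.argsort(np.linalg.norm(keypts_ref - p, axis=1))[:k]
    A, t = fit_affine(keypts_ref[nearest], keypts_cur[nearest])
    return A @ p + t

# Toy check: if the key points translate by (5, 0), the control point should too.
ref = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
cur = ref + np.array([5., 0.])
print(warp_control_point(np.array([5., 5.]), ref, cur))   # ≈ [10.  5.]
```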

    Supplementary Material

    ZIP File (3606813.zip)
    Supplemental File



    Published In

    UIST '23: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology
    October 2023
    1825 pages
    ISBN: 9798400701320
    DOI: 10.1145/3586183


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Animation
    2. Kinetic typography
    3. Motion transfer

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Shanghai Municipal Science and Technology
    • Natural Science Foundation of China
    • Hong Kong Research Grants Council

    Conference

    UIST '23

    Acceptance Rates

    Overall Acceptance Rate 842 of 3,967 submissions, 21%
