CoolMoves: User Motion Accentuation in Virtual Reality

Published: 24 June 2021

Abstract

Current Virtual Reality (VR) systems are bereft of stylization and embellishment of the user's motion, concepts that have been well explored in animation for games and movies. We present CoolMoves, a system for expressive and accentuated full-body motion synthesis of a user's virtual avatar in real time, from the limited input cues afforded by current consumer-grade VR systems, specifically headset and hand positions. We use existing motion capture databases as a template motion repository to draw from. We match similar spatio-temporal motions present in the database and then interpolate between them using a weighted distance metric. Joint prediction probability is then used to temporally smooth the synthesized motion, using human motion dynamics as a prior. This allows our system to work well even with very sparse motion databases (e.g., with only 3-5 motions per action). We validate our system with four experiments: a technical evaluation of quantitative pose reconstruction, and three user studies evaluating motion quality, embodiment, and agency.
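The matching-and-interpolation step described in the abstract can be sketched in a few lines. This is a minimal illustration under assumed data structures, not the authors' implementation: it assumes each template motion is stored as a pair of a sparse input pose (head and hand positions) and a full-body pose, matches a query against the k nearest templates by Euclidean distance, and blends their full-body poses with inverse-distance weights. The joint-prediction-probability temporal smoothing stage is omitted.

```python
import math

def pose_distance(query, template):
    # query/template: list of (x, y, z) tuples for [head, left hand, right hand]
    return math.sqrt(sum((a - b) ** 2
                         for q, t in zip(query, template)
                         for a, b in zip(q, t)))

def blend_full_body(query, templates, k=2, eps=1e-6):
    """Hypothetical sketch of nearest-neighbor pose blending.

    templates: list of (sparse_pose, full_body_pose) pairs, where
    full_body_pose is a list of (x, y, z) joint positions.
    """
    # Keep the k templates whose sparse poses best match the query.
    # (Distance is recomputed below for the weights; fine for a sketch.)
    scored = sorted(templates, key=lambda t: pose_distance(query, t[0]))[:k]
    # Inverse-distance weights: closer templates dominate the blend.
    weights = [1.0 / (pose_distance(query, s) + eps) for s, _ in scored]
    total = sum(weights)
    n_joints = len(scored[0][1])
    blended = []
    for j in range(n_joints):
        # Weighted average of each joint position across the k templates.
        joint = tuple(
            sum(w * full[j][axis] for w, (_, full) in zip(weights, scored)) / total
            for axis in range(3)
        )
        blended.append(joint)
    return blended
```

With a query that exactly matches one template's sparse pose, the inverse-distance weighting makes the output collapse (up to eps) onto that template's full-body pose, which is the intended behavior of the interpolation.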

Supplementary Material

ahuja.zip: Supplemental movie, appendix, image, and software files for CoolMoves: User Motion Accentuation in Virtual Reality



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 2
June 2021
932 pages
EISSN: 2474-9567
DOI: 10.1145/3472726
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 June 2021
Published in IMWUT Volume 5, Issue 2


Author Tags

  1. Motion Embellishment
  2. Motion Stylization
  3. Pose Tracking
  4. Virtual Reality

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (last 12 months): 444
  • Downloads (last 6 weeks): 72
Reflects downloads up to 13 Nov 2024

Cited By
  • (2024) Full-body pose reconstruction and correction in virtual reality for rehabilitation training. Frontiers in Neuroscience, 18. DOI: 10.3389/fnins.2024.1388742. Online publication date: 4-Apr-2024.
  • (2024) Dog Code: Human to Quadruped Embodiment using Shared Codebooks. Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, 1-11. DOI: 10.1145/3677388.3696339. Online publication date: 21-Nov-2024.
  • (2024) Realistic Full-Body Motion Generation from Sparse Tracking with State Space Model. Proceedings of the 32nd ACM International Conference on Multimedia, 4024-4033. DOI: 10.1145/3664647.3681013. Online publication date: 28-Oct-2024.
  • (2024) Reframe: Recording and Editing Character Motion in Virtual Reality. ACM SIGGRAPH 2024 Immersive Pavilion, 1-2. DOI: 10.1145/3641521.3664413. Online publication date: 27-Jul-2024.
  • (2024) Ultra Inertial Poser: Scalable Motion Capture and Tracking from Sparse Inertial Sensors and Ultra-Wideband Ranging. ACM SIGGRAPH 2024 Conference Papers, 1-11. DOI: 10.1145/3641519.3657465. Online publication date: 13-Jul-2024.
  • (2024) Physical Non-inertial Poser (PNP): Modeling Non-inertial Effects in Sparse-inertial Human Motion Capture. ACM SIGGRAPH 2024 Conference Papers, 1-11. DOI: 10.1145/3641519.3657436. Online publication date: 13-Jul-2024.
  • (2024) Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-25. DOI: 10.1145/3631415. Online publication date: 12-Jan-2024.
  • (2024) FingerPuppet: Finger-Walking Performance-based Puppetry for Human Avatar. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3613905.3650840. Online publication date: 11-May-2024.
  • (2024) TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3613904.3641927. Online publication date: 11-May-2024.
  • (2024) DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced Three-Point Trackers. Computer Graphics Forum 43(2). DOI: 10.1111/cgf.15057. Online publication date: 27-Apr-2024.
