
Controllable Group Choreography Using Contrastive Diffusion

Published: 05 December 2023

Abstract

Music-driven group choreography poses a considerable challenge but holds significant potential for a wide range of industrial applications. The ability to generate synchronized and visually appealing group dance motions aligned with music opens up opportunities in fields such as entertainment, advertising, and virtual performances. However, most recent works either cannot generate high-fidelity long-term motions or fail to offer a controllable experience. In this work, we address the demand for high-quality, customizable group dance generation by effectively governing the consistency and diversity of group choreographies. In particular, we use a diffusion-based generative approach to synthesize long-term group dances with a flexible number of dancers while ensuring coherence with the input music. We then introduce a Group Contrastive Diffusion (GCD) strategy to strengthen the connection between dancers and their group, making it possible to control the consistency or diversity level of the synthesized group animation via the classifier-guidance sampling technique. Through extensive experiments and evaluation, we demonstrate the effectiveness of our approach in producing visually captivating and consistent group dance motions. The experimental results show that our method achieves the desired levels of consistency and diversity while maintaining the overall quality of the generated group choreography.
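To make the control mechanism concrete: classifier guidance steers each reverse diffusion step with the gradient of a guidance objective, so a single scalar weight can trade group consistency against diversity at sampling time. The following is a minimal, hypothetical PyTorch sketch of this idea; the names (eps_model, group_encoder, gamma) and the cosine-similarity stand-in for the paper's contrastive score are illustrative assumptions, not the authors' released implementation.

    import torch

    def guided_eps(eps_model, group_encoder, x_t, t, music, gamma, alpha_bar_t):
        # One guided noise prediction in a DDPM-style reverse step.
        # x_t: noisy group motion, shape (batch, n_dancers, frames, pose_dim).
        # gamma > 0 pushes sampling toward group consistency;
        # gamma < 0 pushes it toward diversity; gamma = 0 is unguided.
        x_t = x_t.detach().requires_grad_(True)
        eps = eps_model(x_t, t, music)  # noise predicted by the diffusion model

        # Guidance objective: mean cosine similarity between each dancer's
        # embedding and the pooled group embedding (a simple stand-in for a
        # learned contrastive group-consistency score).
        emb = group_encoder(x_t)                # (batch, n_dancers, d)
        group = emb.mean(dim=1, keepdim=True)   # (batch, 1, d)
        score = torch.cosine_similarity(emb, group, dim=-1).mean()

        # Classifier guidance: shift the predicted noise along the gradient
        # of the guidance score, with the usual sqrt(1 - alpha_bar_t) scaling.
        grad = torch.autograd.grad(score, x_t)[0]
        return eps - gamma * (1.0 - alpha_bar_t) ** 0.5 * grad

Under this formulation, the sign and magnitude of gamma act as the consistency/diversity dial the abstract describes, without retraining the underlying diffusion model.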

Supplementary Material

ZIP File (papers_427s4-file4.zip), supplemental material.


Cited By

  • DanceCamAnimator: Keyframe-Based Controllable 3D Dance Camera Synthesis. In Proceedings of the 32nd ACM International Conference on Multimedia (2024), 10200-10209. https://doi.org/10.1145/3664647.3680980
  • Scalable Group Choreography via Variational Phase Manifold Learning. In Computer Vision – ECCV 2024 (2024), 293-311. https://doi.org/10.1007/978-3-031-72649-1_17


Published In

ACM Transactions on Graphics, Volume 42, Issue 6
December 2023
1565 pages
ISSN:0730-0301
EISSN:1557-7368
DOI:10.1145/3632123
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 December 2023
Published in TOG Volume 42, Issue 6


Author Tags

  1. diffusion models
  2. group choreography animation
  3. group motion synthesis
  4. machine learning

Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 184
  • Downloads (last 6 weeks): 28

Reflects downloads up to 12 Nov 2024.
