
Camera keyframing with style and control

Published: 10 December 2021

Abstract

We present a novel technique that enables 3D artists to synthesize camera motions in virtual environments following a camera style, while enforcing user-designed camera keyframes as constraints along the sequence. To solve this constrained motion in-betweening problem, we design and train a camera motion generator on a collection of temporal cinematic features (camera and actor motions), conditioned on target keyframes. We further condition the generator with a style code that controls how the interpolation between keyframes is performed. Style codes are generated by training a second network that encodes different camera behaviors in a compact latent space, the camera style space. Camera behaviors are defined as temporal correlations between actor features and camera motions, and can be extracted from real or synthetic film clips. We further extend the system with fine control of camera speed and direction via a hidden-state mapping technique. We evaluate our method on two aspects: (i) the capacity to synthesize style-aware camera trajectories with user-defined keyframes; and (ii) the capacity to ensure that in-between motions still comply with the reference camera style while satisfying the keyframe constraints. As a result, our system is the first style-aware keyframe in-betweening technique for camera control that balances style-driven automation with precise and interactive control of keyframes.
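The conditioning scheme the abstract describes (a generator driven per frame by the previous camera pose, actor features, a target keyframe, a time-to-keyframe signal, and a style code) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: all names and feature sizes are assumptions, and a single linear layer stands in for the trained recurrent generator.

```python
import numpy as np

POSE_DIM = 5   # camera pose size, e.g. a toric-space pose (assumed)
ACTOR_DIM = 6  # per-frame actor feature size (assumed)
STYLE_DIM = 3  # latent style-code size (assumed)

def make_condition(prev_pose, actor_feat, target_kf, frames_to_kf, style):
    """Build the per-frame generator input: previous camera pose, actor
    features, the target keyframe pose, a scalar frames-to-keyframe
    signal, and the style code (all sizes are illustrative)."""
    return np.concatenate(
        [prev_pose, actor_feat, target_kf, [float(frames_to_kf)], style])

def rollout(init_pose, actor_feats, target_kf, kf_index, style, W):
    """Toy autoregressive rollout: the linear map W stands in for the
    trained generator and predicts a pose delta per frame."""
    pose = init_pose
    poses = [pose]
    for t, actor in enumerate(actor_feats, start=1):
        x = make_condition(pose, actor, target_kf, kf_index - t, style)
        pose = pose + W @ x  # advance the camera by the predicted delta
        poses.append(pose)
    return np.stack(poses)
```

Changing `style` while keeping the keyframes fixed would change the in-between trajectory but not the constrained endpoints, which is the behavior the paper's style-conditioned in-betweening aims for.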

Supplementary Material

MP4 File (a209-jiang.mp4)




    Published In

    ACM Transactions on Graphics  Volume 40, Issue 6
    December 2021
    1351 pages
    ISSN:0730-0301
    EISSN:1557-7368
    DOI:10.1145/3478513

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 10 December 2021
    Published in TOG Volume 40, Issue 6


    Author Tags

    1. camera behaviors
    2. machine learning
    3. virtual cinematography

    Qualifiers

    • Research-article

    Funding Sources

    • This work was supported in part by the National Key R&D Program of China (2019YFF0302902).


    Cited By

    • (2024) Evaluating Visual Perception of Object Motion in Dynamic Environments. ACM Transactions on Graphics 43:6, 1-12. DOI: 10.1145/3687912. Online publication date: 19-Nov-2024
    • (2024) CineMPC: A Fully Autonomous Drone Cinematography System Incorporating Zoom, Focus, Pose, and Scene Composition. IEEE Transactions on Robotics 40, 1740-1757. DOI: 10.1109/TRO.2024.3353550. Online publication date: 1-Jan-2024
    • (2023) Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting. Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games, 1-8. DOI: 10.1145/3623264.3624468. Online publication date: 15-Nov-2023
    • (2023) A Drone Video Clip Dataset and its Applications in Automated Cinematography. Computer Graphics Forum 41:7, 189-203. DOI: 10.1111/cgf.14668. Online publication date: 20-Mar-2023
    • (2023) “Can You Move It?”: The Design and Evaluation of Moving VR Shots in Sport Broadcast. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 839-848. DOI: 10.1109/ISMAR59233.2023.00099. Online publication date: 16-Oct-2023
    • (2023) CineTransfer: Controlling a Robot to Imitate Cinematographic Style from a Single Example. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 10044-10049. DOI: 10.1109/IROS55552.2023.10342280. Online publication date: 1-Oct-2023
    • (2023) GAIT: Generating Aesthetic Indoor Tours with Deep Reinforcement Learning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 7375-7385. DOI: 10.1109/ICCV51070.2023.00681. Online publication date: 1-Oct-2023
    • (2023) Learning Generalizable Light Field Networks from Few Images. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1-5. DOI: 10.1109/ICASSP49357.2023.10096979. Online publication date: 4-Jun-2023
    • (2023) JAWS: Just A Wild Shot for Cinematic Transfer in Neural Radiance Fields. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16933-16942. DOI: 10.1109/CVPR52729.2023.01624. Online publication date: Jun-2023
    • (2023) Warping character animations using visual motion features. Computers and Graphics 110:C, 38-48. DOI: 10.1016/j.cag.2022.11.008. Online publication date: 1-Feb-2023
