
Thinking Like a Director: Film Editing Patterns for Virtual Cinematographic Storytelling

Published: 23 October 2018

Abstract

This article introduces Film Editing Patterns (FEP), a language for formalizing film editing practices and stylistic choices found in movies. FEP constructs are constraints, expressed over one or more shots of a movie sequence, that characterize changes in cinematographic visual properties such as shot size, camera angle, or the layout of actors on the screen. We present the vocabulary of the FEP language, show how it can be used to analyze style in annotated film data, and describe how it can support users in the creative design of film sequences in 3D. More specifically, (i) we define the FEP language; (ii) we present an application for crafting filmic sequences from 3D animated scenes that uses FEPs as a high-level means of selecting cameras and performing cuts between cameras that follow best practices in cinema; and (iii) we evaluate the benefits of FEPs through user experiments in which professional filmmakers and amateurs created cinematographic sequences. The evaluation suggests that users generally appreciate the idea of FEPs, and that FEPs can effectively help novice and moderately experienced users craft film sequences with little training.
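The FEP grammar itself is defined in the article rather than on this page, but the abstract's core idea, a constraint over one or more shots on properties such as shot size, camera angle, and on-screen layout, can be sketched in a few lines. The sketch below is a hypothetical Python encoding: the `Shot` fields, the shot-size scale, and the `intensify` predicate are illustrative stand-ins, not the paper's actual FEP vocabulary.

```python
# Illustrative sketch only: a toy encoding of a Film Editing Pattern (FEP)
# as a constraint over a sequence of shots. All names here are hypothetical;
# the article defines its own FEP vocabulary and syntax.
from dataclasses import dataclass

# A common shot-size scale, ordered from widest to tightest framing.
SHOT_SIZES = ["long", "medium-long", "medium", "medium-close", "close"]

@dataclass
class Shot:
    size: str            # one of SHOT_SIZES
    angle: str           # e.g. "high", "eye-level", "low"
    on_screen: tuple     # actors visible in the frame

def intensify(shots):
    """Toy FEP: shot sizes must get strictly tighter across the sequence,
    a pattern often used to build dramatic tension."""
    ranks = [SHOT_SIZES.index(s.size) for s in shots]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

seq = [Shot("long", "eye-level", ("Anna", "Ben")),
       Shot("medium", "eye-level", ("Anna",)),
       Shot("close", "low", ("Anna",))]
print(intensify(seq))  # True: each cut moves to a tighter framing
```

In an authoring tool of the kind the abstract describes, such predicates would act as filters: for each shot, only cameras whose framings keep the selected FEPs satisfiable would be proposed to the user.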

Supplementary Material

wu.mp4 (wu.zip)
Supplemental movie and image files for Thinking Like a Director: Film Editing Patterns for Virtual Cinematographic Storytelling


Published In

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 14, Issue 4
Special Section on Deep Learning for Intelligent Multimedia Analytics
November 2018
221 pages
ISSN:1551-6857
EISSN:1551-6865
DOI:10.1145/3282485

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 23 October 2018
Accepted: 01 July 2018
Revised: 01 May 2018
Received: 01 December 2017
Published in TOMM Volume 14, Issue 4

Author Tags

  1. 3D animation
  2. film storytelling
  3. assisted creativity
  4. editing
  5. virtual cinematography
