
A System for Efficient 3D Printed Stop-motion Face Animation

Published: 18 October 2019
Abstract

    Computer animation in conjunction with 3D printing has the potential to positively impact traditional stop-motion animation. As 3D printing every frame of a computer animation is prohibitively slow and expensive, 3D printed stop-motion can only be viable if animations can be faithfully reproduced using a compact library of 3D printed and efficiently assemblable parts. We thus present the first system for processing computer animation sequences (typically faces) to produce an optimal set of replacement parts for use in 3D printed stop-motion animation. Given an input animation sequence of topology invariant deforming meshes, our problem is to output a library of replacement parts and per-animation-frame assignment of the parts, such that we maximally approximate the input animation, while minimizing the amount of 3D printing and assembly. Inspired by current stop-motion workflows, a user manually indicates which parts of the model are preferred for segmentation; then, we find curves with minimal deformation along which to segment the mesh. We then present a novel algorithm to zero out deformations along the segment boundaries, so that replacement sets for each part can be interchangeably and seamlessly assembled together. The part boundaries are designed to ease 3D printing and instrumentation for assembly. Each part is then independently optimized using a graph-cut technique to find a set of replacements, whose size can be user defined, or automatically computed to adhere to a printing budget or allowed deviation from the original animation. Our evaluation is threefold: we show results on a variety of facial animations, both digital and 3D printed, critiqued by a professional animator; we show the impact of various algorithmic parameters; and we compare our results to naive solutions. Our approach can reduce the printing time and cost significantly for stop-motion animated films.
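    The per-part optimization described in the abstract can be illustrated with a toy sketch: given the per-frame geometry of one segmented part, choose a small library of k replacement parts and a per-frame assignment that minimizes deviation from the input animation. The sketch below uses a simple k-medoids clustering as a stand-in; it is not the paper's method, which uses a graph-cut formulation that additionally penalizes temporal label switching between consecutive frames. All names here are illustrative.

    ```python
    import random

    def build_replacement_library(frames, k, iters=50, seed=0):
        """Toy replacement-part selection for one segmented part.

        frames: list of per-frame vertex coordinate tuples (flattened).
        Returns (medoids, labels): indices of the k frames chosen as
        printable replacement parts, and the per-frame assignment.
        """
        rng = random.Random(seed)
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        medoids = rng.sample(range(len(frames)), k)
        for _ in range(iters):
            # Assign each animation frame to its nearest library part.
            labels = [min(range(k), key=lambda j: dist(f, frames[medoids[j]]))
                      for f in frames]
            # Re-pick each library part as the cluster member that
            # minimizes total deviation within its cluster.
            new_medoids = []
            for j in range(k):
                members = [i for i, l in enumerate(labels) if l == j]
                if not members:
                    new_medoids.append(medoids[j])
                    continue
                best = min(members, key=lambda m: sum(
                    dist(frames[m], frames[i]) for i in members))
                new_medoids.append(best)
            if new_medoids == medoids:
                break  # converged
            medoids = new_medoids
        labels = [min(range(k), key=lambda j: dist(f, frames[medoids[j]]))
                  for f in frames]
        return medoids, labels

    # Four 1D "frames" forming two obvious poses; a 2-part library
    # should reproduce the sequence with small error.
    medoids, labels = build_replacement_library(
        [(0.0,), (0.1,), (5.0,), (5.1,)], k=2)
    ```

    Trading the library size k against reconstruction error is the central knob: a larger k tracks the animation more faithfully but costs more printing and assembly, which is what the paper's printing-budget constraint automates.
    
    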

    Supplementary Material

    abdrashitov (abdrashitov.zip)
    Supplemental movie and image files for "A System for Efficient 3D Printed Stop-motion Face Animation."





          Published In

          ACM Transactions on Graphics, Volume 39, Issue 1
          February 2020, 112 pages
          ISSN: 0730-0301
          EISSN: 1557-7368
          DOI: 10.1145/3366374

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          Published: 18 October 2019
          Accepted: 01 July 2019
          Revised: 01 May 2019
          Received: 01 September 2018
          Published in TOG Volume 39, Issue 1


          Author Tags

          1. 3D printing
          2. stop-motion
          3. facial animation
          4. film production
          5. stylization

          Qualifiers

          • Research-article
          • Research
          • Refereed

          Funding Sources

          • NSERC CRDPJ


          Article Metrics

          • Downloads (last 12 months): 66
          • Downloads (last 6 weeks): 1
          Reflects downloads up to 26 Jul 2024


          Cited By

          • (2024) "3D Animation Production and Generation Based on Human Body Base 3D Models." Applied Mathematics and Nonlinear Sciences 9:1. DOI: 10.2478/amns-2024-0053. Online publication date: 31-Jan-2024.
          • (2023) "The Blend of Reality and Illusion: Analysis of the Artistic Characteristics of Stop-Motion Animation." Proceedings of the 2022 4th International Conference on Literature, Art and Human Development (ICLAHD 2022), 545-551. DOI: 10.2991/978-2-494069-97-8_69. Online publication date: 11-Feb-2023.
          • (2023) "Design of a Group Animation Production System Based on Artificial Intelligence Algorithms." 2023 2nd International Conference on 3D Immersion, Interaction and Multi-sensory Experiences (ICDIIME), 500-503. DOI: 10.1109/ICDIIME59043.2023.00103. Online publication date: Jul-2023.
          • (2021) "Interactive Modelling of Volumetric Musculoskeletal Anatomy." ACM Transactions on Graphics 40:4, 1-13. DOI: 10.1145/3450626.3459769. Online publication date: 19-Jul-2021.
          • (2020) "Interactive Exploration and Refinement of Facial Expression using Manifold Learning." Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 778-790. DOI: 10.1145/3379337.3415877. Online publication date: 20-Oct-2020.
          • (2020) "Dynamic Modeling of Interactive Scene in 3D Animation Teaching." 2020 International Conference on Robots & Intelligent System (ICRIS), 665-668. DOI: 10.1109/ICRIS52159.2020.00167. Online publication date: Dec-2020.
