Audiovisual Inputs for Learning Robust, Real-time Facial Animation with Lip Sync
Abstract
Supplementary Material
- Download (185.56 MB)
Recommendations
From 2D to 3D real-time expression transfer for facial animation
In this paper, we present a three-stage approach that creates realistic facial animations by tracking the expressions of a human face in 2D and transferring them to a human-like 3D model in real time. Our calibration-free method, which is based on an ...
Audio2Rig: Artist-oriented deep learning tool for facial and lip sync animation
SIGGRAPH '24: ACM SIGGRAPH 2024 Talks. Creating realistic or stylized facial and lip sync animation is a tedious task. It requires a lot of time and skill to sync the lips with audio and convey the right emotion on the character's face. To allow animators to spend more time on the artistic ...
Efficient lip-synch tool for 3D cartoon animation
CASA'2008 Special Issue. We propose a set of algorithms for efficiently producing speech animation for 3D cartoon characters. Our prototype system is based on blendshapes, a linear interpolation technique that is widely used in facial animation practice. In our system, a few base ...
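The excerpt above relies on blendshape evaluation, where the animated face is a linear interpolation of base shapes. As a rough illustration of that general technique (a minimal sketch, not the cited paper's implementation; all array names, shapes, and weights below are assumed for the example):

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Linear blendshape evaluation.

    neutral : (V, 3) neutral-pose vertex positions
    targets : (K, V, 3) per-shape target vertex positions
    weights : (K,) blend weights, typically in [0, 1]
    """
    offsets = targets - neutral              # displacement of each target from the neutral face
    return neutral + np.tensordot(weights, offsets, axes=1)  # weighted sum of offsets

# Toy example: 4 vertices, 2 hypothetical targets ("jaw open", "smile").
neutral = np.zeros((4, 3))
targets = np.stack([np.full((4, 3), 0.5),     # "jaw open" displaces every vertex by +0.5
                    np.full((4, 3), -0.25)])  # "smile" displaces every vertex by -0.25
print(blend_shapes(neutral, targets, np.array([0.6, 0.2])))
# Every vertex lands at 0.6 * 0.5 + 0.2 * (-0.25) = 0.25 on each axis.
```

Because each pose is just one weighted sum per vertex, evaluation is cheap enough for real-time playback, which is why blendshape rigs remain standard practice in facial animation.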
Information
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Research-article
- Research
- Refereed limited