Visual Speech Synthesis by Morphing Visemes
Technical Report, December 1999
Publisher:
  • Massachusetts Institute of Technology
  • 201 Vassar Street, W59-200 Cambridge, MA
  • United States
Published: 01 December 1999
Abstract

We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
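The pipeline described in the abstract (dense correspondence between viseme images, a morph along that correspondence, and concatenation of the resulting transitions) can be illustrated with a short sketch. The snippet below is not the authors' implementation: it assumes OpenCV's Farneback optical flow estimator as a stand-in for the paper's flow method, a simple backward-warp cross-dissolve for the morph, and hypothetical viseme image files.

import cv2
import numpy as np

def morph_visemes(src, dst, num_frames):
    """Generate intermediate frames morphing viseme image src into dst."""
    src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    dst_gray = cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY)

    # Dense correspondence from src to dst (a stand-in for the paper's
    # optical flow stage; the original estimator is not specified here).
    flow = cv2.calcOpticalFlowFarneback(
        src_gray, dst_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = src_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)
        # Approximate backward warp of each endpoint part-way along the
        # correspondence, then cross-dissolve the two warped images.
        sx = (grid_x - t * flow[..., 0]).astype(np.float32)
        sy = (grid_y - t * flow[..., 1]).astype(np.float32)
        warped_src = cv2.remap(src, sx, sy, cv2.INTER_LINEAR)
        dx = (grid_x + (1.0 - t) * flow[..., 0]).astype(np.float32)
        dy = (grid_y + (1.0 - t) * flow[..., 1]).astype(np.float32)
        warped_dst = cv2.remap(dst, dx, dy, cv2.INTER_LINEAR)
        frames.append(cv2.addWeighted(warped_src, 1.0 - t, warped_dst, t, 0.0))
    return frames

# Hypothetical usage: the file names and frame count are assumptions.
viseme_aa = cv2.imread("viseme_aa.png")
viseme_m = cv2.imread("viseme_m.png")
transition = morph_visemes(viseme_aa, viseme_m, num_frames=8)

A full visual utterance would then be assembled by concatenating such transitions, with the number of frames per transition derived from the phoneme durations reported by the text-to-speech synthesizer.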

Contributors
  • Center for Biological & Computational Learning
  • Massachusetts Institute of Technology
