AVSP 2008: Moreton Island, Queensland, Australia
- Roland Göcke, Patrick Lucey, Simon Lucey:
International Conference on Auditory-Visual Speech Processing 2008, Moreton Island, Queensland, Australia, September 26-29, 2008. ISCA 2008
Invited Papers
- Eric Vatikiotis-Bateson:
Concurrency, synchrony, and temporal organization. 1
- Jeffrey F. Cohn:
Facial dynamics reveals person identity and communicative intent, regulates person perception and social interaction. 3
- Iain A. Matthews:
Active appearance models for facial analysis. 5
Contributed Papers
- Barry-John Theobald, Nicholas Wilkinson, Iain A. Matthews:
On evaluating synthesised visual speech. 7-12
- Sidney S. Fels, Robert Pritchard, Eric Vatikiotis-Bateson:
Building a portable gesture-to-audio/visual speech system. 13-18
- Douglas Brungart, Nandini Iyer, Brian D. Simpson, Virginie van Wassenhove:
The effects of temporal asynchrony on the intelligibility of accelerated speech. 19-24
- Josef Chaloupka, Jan Nouza, Jindrich Zdánský:
Audio-visual voice command recognition in noisy conditions. 25-30
- Gianluca Giorgolo, Frans A. J. Verstraten:
Perception of 'speech-and-gesture' integration. 31-36
- Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita:
Analysis of inter- and intra-speaker variability of head motions during spoken dialogue. 37-42
- Sascha Fagel, Gérard Bailly:
German text-to-audiovisual-speech by 3-d speaker cloning. 43-46
- Dawn M. Behne, Yue Wang, Stein-Ove Belsby, Solveig Kaasa, Lisa Simonsen, Kirsti Back:
Visual field advantage in the perception of audiovisual speech segments. 47-50
- Satoshi Tamura, Chiyomi Miyajima, Norihide Kitaoka, Satoru Hayamizu, Kazuya Takeda:
CENSREC-AV: evaluation frameworks for audio-visual speech recognition. 51-54
- Christian Kroos, Ashlie Dreves:
McGurk effect persists with a partially removed visual signal. 55-58
- Sascha Fagel, Katja Madany:
Guided non-linear model estimation (gnoME). 59-62
- Emilie Troille, Marie-Agnès Cathiard, Christian Abry, Lucie Ménard, Denis Beautemps:
Multimodal perception of anticipatory behavior - Comparing blind, hearing and cued speech subjects. 63-68
- Patrick Lucey, Gerasimos Potamianos, Sridha Sridharan:
Patch-based analysis of visual speech from multiple views. 69-74
- Sascha Fagel, Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sebastian Möller:
A comparison of German talking heads in a smart home environment. 75-78
- Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki:
Effect of audio-visual asynchrony between time-expanded speech and a moving image of a talker's face on detection and tolerance thresholds. 79-82
- Bernd J. Kröger, Jim Kannampuzha:
A neurofunctional model of speech production including aspects of auditory and audio-visual speech perception. 83-88
- Marion Dohen, Chun-Huei Wu, Harold Hill:
Auditory-visual perception of prosodic information: inter-linguistic analysis - contrastive focus in French and Japanese. 89-94
- Maeva Garnier:
May speech modifications in noise contribute to enhance audio-visible cues to segment perception? 95-100
- Alexandra Jesse, Elizabeth K. Johnson:
Audiovisual alignment in child-directed speech facilitates word learning. 101-106
- Jeesun Kim, Christian Kroos, Chris Davis:
Hearing a talking face: an auditory influence on a visual detection task. 107-110
- Gérard Bailly, Antoine Bégault, Frédéric Elisei, Pierre Badin:
Speaking with smile or disgust: data and models. 111-114
- Girija Chetty, Michael Wagner:
A multilevel fusion approach for audiovisual emotion recognition. 115-120
- Junru Wu, Xiaosheng Pan, Jiangping Kong, Alan Wee-Chung Liew:
Statistical correlation analysis between lip contour parameters and formant parameters for Mandarin monophthongs. 121-126
- Denis Burnham, Arman Abrahamyan, Lawrence Cavedon, Chris Davis, Andrew Hodgins, Jeesun Kim, Christian Kroos, Takaaki Kuratate, Trent W. Lewis, Martin H. Luerssen, Garth Paine, David M. W. Powers, Marcia Riley, Stelarc, Kate Stevens:
From talking to thinking heads: report 2008. 127-130
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson:
Algorithm for computing spatiotemporal coordination. 131-136
- David Dean, Sridha Sridharan:
Fused HMM adaptation of synchronous HMMs for audio-visual speaker verification. 137-141
- Piero Cosi, Graziano Tisato:
Describing "INTERFACE" a Matlab tool for building talking heads. 143-146
- Milos Zelezný:
Analysis of technologies and resources for multimodal information kiosk for deaf users. 147-152
- Gérard Bailly, Yu Fang, Frédéric Elisei, Denis Beautemps:
Retargeting cued speech hand gestures for different talking heads and speakers. 153-158
- Björn Lidestam:
A, V, and AV discrimination of vowel duration. 159-162
- Goranka Zoric, Igor S. Pandzic:
Towards real-time speech-based facial animation applications built on HUGE architecture. 163-166
- Patrick Lucey, Jessica Howlett, Jeffrey F. Cohn, Simon Lucey, Sridha Sridharan, Zara Ambadar:
Improving pain recognition through better utilisation of temporal information. 167-172
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson:
Linguistically valid movement behavior measured non-invasively. 173-177
- Stephen J. Cox, Richard W. Harvey, Yuxuan Lan, Jacob L. Newman, Barry-John Theobald:
The challenge of multispeaker lip-reading. 179-184
- Sanaul Haq, Philip J. B. Jackson, James D. Edge:
Audio-visual feature selection and reduction for emotion classification. 185-190
- Takaaki Kuratate:
Text-to-AV synthesis system for Thinking Head Project. 191-194
- Katja Madany, Sascha Fagel:
Objective and perceptual evaluation of parameterizations of 3d motion captured speech data. 195-198
- Marc Sato, Emilie Troille, Lucie Ménard, Marie-Agnès Cathiard, Vincent L. Gracco:
Listening while speaking: new behavioral evidence for articulatory-to-auditory feedback projections. 199-204
- Magnus Alm, Dawn M. Behne:
Age-related experience in audio-visual speech perception. 205-208
- Þórir Harðarson, Hans-Heinrich Bothe:
A model for the dynamics of articulatory lip movements. 209-214
- Zdenek Krnoul, Patrik Rostík, Milos Zelezný:
Evaluation of synthesized sign and visual speech by deaf. 215-218
- Erol Ozgur, Mustafa Berkay Yilmaz, Harun Karabalkan, Hakan Erdogan, Mustafa Unel:
Lip segmentation using adaptive color space training. 219-222
- Shi-Lin Wang, Alan Wee-Chung Liew:
Static and dynamic lip feature analysis for speaker verification. 223-227
- James D. Edge, Adrian Hilton, Philip J. B. Jackson:
Parameterisation of 3d speech lip movements. 229-234
- Roland Göcke, Akshay Asthana:
A comparative study of 2d and 3d lip tracking methods for AV ASR. 235-240