Journal on Multimodal User Interfaces, Volume 2
Volume 2, Number 1, July 2008
- Jean Vanderdonckt: Editorial. 1
- Brandon Paulson, Tracy Hammond: MARQS: retrieving sketches learned from a single example using a dual-classifier. 3-11
- Beryl Plimmer: Experiences with digital pen, keyboard and mouse usability. 13-23
- Robbie Schaefer, Wolfgang Müller: Assessment of a multimodal interaction and rendering system against established design principles. 25-41
- Julián García, José Pascual Molina, Diego Martínez, Arturo S. García, Pascual González, Jean Vanderdonckt: Prototyping and evaluating glove-based multimodal interfaces. 43-52
- Marie-Luce Bourguet, Jaeseung Chang: Design and usability evaluation of multimodal interaction with finite state machines: a conceptual framework. 53-60
- Daniel Schreiber, Melanie Hartmann, Felix Flentge, Max Mühlhäuser, Manuel Görtz, Thomas Ziegert: Web based evaluation of proactive user interfaces. 61-72
Volume 2, Number 2, September 2008
- Bülent Sankur: Guest Editorial of the special eNTERFACE issue. 73-74
- Albert Ali Salah, Ramon Morros, Jordi Luque, Carlos Segura, Javier Hernando, Onkar Ambekar, Ben A. M. Schouten, Eric J. Pauwels: Multimodal identification and localization of users in a smart environment. 75-91
- Ferda Ofli, Yasemin Demir, Yücel Yemez, Engin Erzin, A. Murat Tekalp, Koray Balci, Idil Kizoglu, Lale Akarun, Cristian Canton-Ferrer, Joëlle Tilmanne, Elif Bozkurt, A. Tanju Erdem: An audio-driven dancing avatar. 93-103
- Savvas Argyropoulos, Konstantinos Moustakas, Alexey Karpov, Oya Aran, Dimitrios Tzovaras, Thanos Tsakiris, Giovanna Varni, Byungjun Kwon: Multimodal user interface for the communication of the disabled. 105-116
- Oya Aran, Ismail Ari, Lale Akarun, Erinç Dikici, Siddika Parlak, Murat Saraçlar, Pavel Campr, Marek Hrúz: Speech and sliding text aided sign retrieval from hearing impaired sign news videos. 117-131
- Nicolas D'Alessandro, Onur Babacan, Baris Bozkurt, Thomas Dubuisson, Andre Holzapfel, Loïc Kessous, Alexis Moinet, Maxime Vlieghe: RAMCESS 2.X framework - expressive voice analysis for realtime and accurate synthesis of singing. 133-144
Volume 2, Numbers 3-4, December 2008
- Ju-Hwan Lee, Charles Spence: Feeling what you hear: task-irrelevant sounds modulate tactile perception delivered via a touch screen. 145-156
- Dongmei Jiang, Ilse Ravyse, Hichem Sahli, Werner Verhelst: Speech driven realistic mouth animation based on multi-modal unit selection. 157-169
- Anton Batliner, Christian Hacker, Elmar Nöth: To talk or not to talk with a computer. 171-186
- Matei Mancas, Donald Glowinski, Gualtiero Volpe, Antonio Camurri, Pierre Bretéché, Jonathan Demeyer, Thierry Ravet, Paolo Coletta: Real-time motion attention and expressive gesture interfaces. 187-198
- Shuichi Sakamoto, Akihiro Tanaka, Komi Tsumura, Yôiti Suzuki: Effect of speed difference between time-expanded speech and moving image of talker's face on word intelligibility. 199-203
- Elizabeth S. Redden, Linda R. Elliott, Rodger A. Pettitt, Christian B. Carstens: A tactile option to reduce robot controller size. 205-216
- Georgios Goudelis, Anastasios Tefas, Ioannis Pitas: Emerging biometric modalities: a survey. 217-235