It is our great pleasure to welcome you to Seattle and to the 2015 ACM International Conference on Multimodal Interaction. This year's conference continues its tradition of being the premier forum for presenting research results and experience reports on multimodal human-human and human-computer interaction. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2015 features a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits, grand challenges, and doctoral symposium papers. It is followed by a day with five workshops.
The ICMI 2015 call for long and short papers attracted 127 paper submissions (84 in the long category and 43 in the short category). The papers were reviewed by a program committee led by three Program Chairs and composed of 25 Senior Program Committee members and a large number of technical reviewers. During the rebuttal process, the authors had the opportunity to clarify misunderstandings and respond to questions raised in the reviews and meta-reviews. After the rebuttal phase, the program chairs held several remote meetings to discuss the papers. As a result, 24 papers were accepted for oral presentation and 28 papers were accepted for poster presentation. The acceptance rate is 19% for oral presentations and 41% overall, for short and long papers combined.
This year, the conference will host two invited keynote speeches from thought leaders in industry and academia. They are:
Sharing Representations for Long Tail Computer Vision Problems, Dr. Samy Bengio, Google (USA),
Interaction Studies with Social Robots, Prof. Kerstin Dautenhahn, University of Hertfordshire (UK).
In addition, Dr. Eric Horvitz, Microsoft (USA), the recipient of the ICMI 2015 Sustained Accomplishment Award, presented by the ICMI Advisory Board, will give a plenary talk entitled Connections. We encourage everyone to attend the keynote presentations. These valuable and insightful talks will help guide us to a better understanding of the future of multimodal interaction.
The main ICMI conference program includes an exciting Demonstration session co-chaired by Hrvoje Benko (Microsoft, USA) and Stefan Scherer (University of Southern California, USA) that will showcase innovative implementations, systems, and technologies that incorporate multimodal interaction. The demonstration session will include 12 refereed demonstrations (out of 17 submitted), plus 5 demonstrations that accompany accepted main track papers.
The Doctoral Consortium is by now a traditional ICMI satellite event which takes place on the first day of the conference and extends our commitment to the next generation of researchers. This year, the event is co-chaired by Carlos Busso (University of Texas at Dallas, USA) and Vidhyasaharan Sethu (University of New South Wales, Australia). In this special session, a highly accomplished mentor team gathers with senior PhD students, selected via a rigorous review process by the Doctoral Consortium Program Committee as well as peer reviews from other applicants, to discuss each student's research plans and progress. From among 30 applications, 14 students were accepted for participation. The accepted students receive a travel grant and a registration waiver to attend both the Doctoral Consortium event and the main conference. The organizers thank the U.S. National Science Foundation (awards IIS-1346655 and IIS-1443097) and the conference sponsors for the financial support that makes this possible.
The Multimodal Grand Challenges were introduced to ICMI in 2012. They are designed to stimulate the community with standardized corpora, competitions, and new research questions. This year's challenges are co-chaired by Cosmin Munteanu (University of Toronto, Canada) and Marcelo Worsley (Stanford University, USA), and include: the Fourth Multimodal Learning Analytics Challenge (MLA), the Third Emotion Recognition In The Wild Challenge (EMOTIW), and the Recognition of Social Touch Gestures Challenge. All three Grand Challenges will be presented on Monday, November 9th, and an overview and poster session will take place on Thursday, November 12th, during the main conference.
The ICMI workshop program was co-chaired this year by Jean-Marc Odobez (IDIAP, Switzerland) and Hayley Hung (Delft University of Technology, Netherlands). Five workshops will be held after the main conference, on Friday, November 13th. They are: the 1st Workshop on Modeling INTERPERsonal SynchrONy And infLuence (INTERPERSONAL), the ACM Workshop on Multimodal Deception Detection (WMDD), the International Workshop on Emotion Representations and Modelling for Companion Systems (ERM4CT), the 1st International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (ASSP4MI), and the Workshop on Developing Portable & Context-aware Multimodal Applications for Connected Devices using W3C Multimodal Architecture (sponsored workshop).
Outstanding paper awards have also been a tradition at ICMI. This year, in addition to the traditional Outstanding Paper award and Outstanding Student Paper award, the ICMI board also established an Honorable Mention award. The Program Chairs considered the top-ranked paper submissions based on the reviews and meta-reviews and identified a set of nominations for the awards. A paper award committee was created with internationally renowned researchers in multimodal interaction. The committee reviewed the nominated papers carefully and selected the recipients of these awards, which will be announced at the banquet.
The Sustained Accomplishment Award is presented to a scientist who has made innovative and long-lasting contributions to our field. The award acknowledges an individual who has demonstrated vision in shaping the field, with a sustained record of research that has influenced the work of others. This year's award is presented to Dr. Eric Horvitz, Microsoft (USA), for his many contributions to multimodal interaction. The ICMI Community Service Award is given to an individual who has made organizational contributions that have had a major impact on the ICMI community and its annual events. Nominees will have made contributions to build and diversify our community over a period of five years or longer, including the creation of mechanisms to promote intellectual exchange and multidisciplinary or international alliances, and the expansion of opportunities for student training and participation. The 2015 ICMI Community Service award is presented to Dr. Kenji Mase, Nagoya University (Japan). Finally, this year, the ICMI Advisory Board will present two Ten Year Technical Impact paper awards at the banquet.
Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects
In this paper, we present a dialog system that was exhibited at the Swedish National Museum of Science and Technology. Two visitors at a time could play a collaborative card sorting game together with the robot head Furhat, where the three players ...
Visual Saliency and Crowdsourcing-based Priors for an In-car Situated Dialog System
This paper addresses issues in situated language understanding in a moving car. We propose a reference resolution method to identify user queries about specific target objects in their surroundings. We investigate methods of predicting which target ...
Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding
Spoken language interfaces are appearing in various smart devices (e.g. smart-phones, smart-TV, in-car navigating systems) and serve as intelligent assistants (IAs). However, most of them do not consider individual users' behavioral profiles and ...
Who's Speaking?: Audio-Supervised Classification of Active Speakers in Video
Active speakers have traditionally been identified in video by detecting their moving lips. This paper demonstrates the same using spatio-temporal features that aim to capture other cues: movement of the head, upper body and hands of active speakers. ...