DOI: 10.1145/2818346.2830598

The Grenoble System for the Social Touch Challenge at ICMI 2015

Published: 09 November 2015
    Abstract

    New technologies, and especially robotics, are moving towards more natural user interfaces. Work has been done on several modalities of interaction, such as sight (computer vision) and audio (speech and audio recognition), but other modalities remain less researched. Touch is one of the least studied modalities in human-robot interaction (HRI), yet it could be valuable for naturalistic interaction. However, touch signals can vary in semantics, so recognizing touch gestures is necessary to make human-robot interaction even more natural. We propose a method to recognize touch gestures. This method was developed on the CoST corpus and then applied directly to the HAART dataset as our entry to the Social Touch Challenge at ICMI 2015. Our touch gesture recognition process is detailed in this article to make it reproducible by other research teams. Besides describing the feature set, we manually filtered the training corpus to produce two datasets. For the challenge, we submitted six different systems: a Support Vector Machine and a Random Forest classifier for the HAART dataset, and, for the CoST dataset, the same two classifiers tested in two conditions, using either the full or the filtered training set. As reported by the organizers, our systems achieved the best correct classification rates in this year's challenge (70.91% on HAART, 61.34% on CoST). Our performance is slightly better than that of the other participants but remains below previously reported state-of-the-art results.
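The submissions described above pair two standard classifiers (SVM and Random Forest) trained on features extracted from touch-gesture recordings. A minimal sketch of that setup with scikit-learn follows; the synthetic feature matrix, class shift, and hyperparameters here are illustrative assumptions, not the paper's actual feature set or configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-gesture feature vectors (the paper derives
# its features from pressure-sensor frames; shapes here are arbitrary).
rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 300, 20, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
# Shift each class's mean so the toy problem is learnable.
X += y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, random_state=0, stratify=y
)

# Two classifiers of the same kinds the challenge entries used.
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("RF accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```

The comparison across "all" versus "filtered" training data conditions would amount to fitting the same two estimators on each training subset and reporting per-condition accuracy.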




    Published In

    ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction
    November 2015
    678 pages
    ISBN:9781450339124
    DOI:10.1145/2818346


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. gesture recognition
    2. multimodal perception
    3. touch challenge

    Qualifiers

    • Research-article

    Conference

    ICMI '15: International Conference on Multimodal Interaction
    November 9 - 13, 2015
    Seattle, Washington, USA

    Acceptance Rates

    ICMI '15 Paper Acceptance Rate 52 of 127 submissions, 41%;
    Overall Acceptance Rate 453 of 1,080 submissions, 42%


    Cited By

    • (2024) "Advancements in Tactile Hand Gesture Recognition for Enhanced Human-Machine Interaction," 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE), pp. 01-08, doi: 10.1109/ROSE62198.2024.10590799. Online publication date: 20-Jun-2024.
    • (2023) "Low-latency Classification of Social Haptic Gestures Using Transformers," Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, pp. 137-141, doi: 10.1145/3568294.3580059. Online publication date: 13-Mar-2023.
    • (2023) "Discerning Affect From Touch and Gaze During Interaction With a Robot Pet," IEEE Transactions on Affective Computing, 14(2), pp. 1598-1612, doi: 10.1109/TAFFC.2021.3094894. Online publication date: 1-Apr-2023.
    • (2023) "Recognizing Social Touch Gestures using Optimized Class-weighted CNN-LSTM Networks," 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 2024-2029, doi: 10.1109/RO-MAN57019.2023.10309595. Online publication date: 28-Aug-2023.
    • (2023) "Touch Technology in Affective Human-, Robot-, and Virtual-Human Interactions: A Survey," Proceedings of the IEEE, 111(10), pp. 1333-1354, doi: 10.1109/JPROC.2023.3272780. Online publication date: Oct-2023.
    • (2022) "Multitask Touch Gesture and Emotion Recognition Using Multiscale Spatiotemporal Convolutions With Attention Mechanism," IEEE Sensors Journal, 22(16), pp. 16190-16201, doi: 10.1109/JSEN.2022.3187776. Online publication date: 15-Aug-2022.
    • (2022) "Touch Gesture Recognition Using Spatiotemporal Fusion Features," IEEE Sensors Journal, 22(1), pp. 428-437, doi: 10.1109/JSEN.2021.3090576. Online publication date: 1-Jan-2022.
    • (2020) "Touch Recognition with Attentive End-to-End Model," Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 694-698, doi: 10.1145/3382507.3418834. Online publication date: 22-Oct-2020.
    • (2020) "Extending SpArSe: Automatic Gesture Recognition Architectures for Embedded Devices," 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 7-12, doi: 10.1109/ICMLA51294.2020.00011. Online publication date: Dec-2020.
    • (2019) "Online Social Touch Pattern Recognition with Multi-modal-sensing Modular Tactile Interface," 2019 16th International Conference on Ubiquitous Robots (UR), pp. 271-277, doi: 10.1109/URAI.2019.8768706. Online publication date: Jun-2019.
