DOI: 10.1145/3486011.3486472
Emotional AI in Healthcare: a pilot architecture proposal to merge emotion recognition tools

Published: 20 December 2021

Abstract
    The use of emotional artificial intelligence (EAI) looks promising and has continued to improve in recent years. However, significant challenges remain before EAI can be used effectively to help diagnose and treat health conditions. Because EAI is still under development, one of the most important challenges is integrating the technology into the health provision process. In this sense, it is important to complement EAI technologies with expert supervision, and to provide health professionals with the tools they need to make the most of EAI without deep knowledge of the technology. The present work provides an initial architecture proposal for using the different available emotion recognition technologies, whose combination could enhance emotion detection. The proposed architecture follows an evolutionary approach so that it can be integrated into digital health ecosystems and new modules can be added easily. In addition, internal data exchange uses Robot Operating System (ROS) syntax, so the architecture is also suitable for physical agents.
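    The abstract does not detail how the outputs of the merged tools are combined. As an illustrative sketch only (the tool names, emotion labels, and simple averaging strategy are assumptions, not the paper's design), fusing the per-emotion probabilities reported by several recognition services could look like this:

    ```python
    # Hypothetical fusion of emotion recognition outputs from multiple tools.
    # Each tool reports a probability per emotion label; we average across
    # tools and return the top label. Tool outputs below are made-up values.
    from statistics import mean

    EMOTIONS = ["happy", "sad", "angry", "neutral"]

    def fuse(tool_outputs):
        """Average per-emotion scores across tools; return (label, score)."""
        fused = {e: mean(out.get(e, 0.0) for out in tool_outputs)
                 for e in EMOTIONS}
        top = max(fused, key=fused.get)
        return top, fused[top]

    # Two hypothetical tools (e.g. a facial and a speech classifier) that
    # disagree slightly; averaging smooths the disagreement.
    face_api = {"happy": 0.7, "sad": 0.1, "angry": 0.1, "neutral": 0.1}
    voice_api = {"happy": 0.5, "sad": 0.2, "angry": 0.1, "neutral": 0.2}
    label, score = fuse([face_api, voice_api])
    ```

    In the architecture described, such fused results would be exchanged between modules as ROS-style messages, which is what makes the design reusable for physical agents as well as purely software ecosystems.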

    Cited By

    • (2024) Leveraging AI to Personalize and Humanize Online Learning. In Humanizing Online Teaching and Learning in Higher Education, ch. 11, pp. 224–246. https://doi.org/10.4018/979-8-3693-0762-5.ch011. Online publication date: 29 Mar 2024.
    • (2024) Sentient libraries: empowering user expeditions with emotional artificial intelligence. Library Hi Tech News. https://doi.org/10.1108/LHTN-01-2024-0001. Online publication date: 16 Jul 2024.
    • (2023) Emotion Recognition from Multimodal Data: a machine learning approach combining classical and hybrid deep architectures. Research on Biomedical Engineering, 39(3), 613–638. https://doi.org/10.1007/s42600-023-00293-9. Online publication date: 2 Aug 2023.

            Published In

            TEEM'21: Ninth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM'21)
            October 2021
            823 pages
            ISBN:9781450390668
            DOI:10.1145/3486011
            Editors: Marc Alier, David Fonseca

            Publisher

            Association for Computing Machinery

            New York, NY, United States


            Author Tags

            1. Digital Ecosystems
            2. Emotional AI
            3. Healthcare
            4. Software Architecture

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Conference

            TEEM'21

            Acceptance Rates

            Overall Acceptance Rate 496 of 705 submissions, 70%
