Applied Affective Computing
January 2022

Publisher: Association for Computing Machinery, New York, NY, United States
ISBN: 978-1-4503-9590-8
Published: 07 February 2022
Pages: 308
Appears in: ACM Books
Abstract

Affective computing is a nascent field at the intersection of artificial intelligence and the social and behavioral sciences. It studies how human emotions are perceived and expressed, which in turn informs the design of intelligent agents and systems that either mimic this behavior to improve their intelligence or incorporate such knowledge to understand and communicate effectively with their human collaborators. Affective computing research has recently seen significant advances and is making a critical transformation from exploratory studies to real-world applications in the emerging research area known as applied affective computing.

This book offers readers an overview of the state-of-the-art and emerging themes in affective computing, including a comprehensive review of the existing approaches to affective computing systems and social signal processing. It provides in-depth case studies of applied affective computing in various domains, such as social robotics and mental well-being. It also addresses ethical concerns related to affective computing and how to prevent misuse of the technology in research and applications. Further, this book identifies future directions for the field and summarizes a set of guidelines for developing next-generation affective computing systems that are effective, safe, and human-centered.

For researchers and practitioners new to affective computing, this book serves as an introduction to the field, helping them identify new research topics and develop novel applications. For more experienced researchers and practitioners, the discussions in this book provide guidance for adopting a human-centered design and development approach to advance affective computing.

References

  1. M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 308–318. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. P. Abbeel and A. Y. Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning. 1. Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. J. Abdi, A. Al-Hindawi, T. Ng, and M. P. Vizcaychipi. 2018. Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open 8, 2, e018815. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  4. A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–18. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. S. Abdullah, E. L. Murnane, M. Matthews, M. Kay, J. A. Kientz, G. Gay, and T. Choudhury. 2016. Cognitive rhythms: Unobtrusive and continuous sensing of alertness using a mobile phone. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp’16. Association for Computing Machinery, New York, NY, 178–189. ISBN 9781450344616. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. E. Acar, F. Hopfgartner, and S. Albayrak. 2014. Understanding affective content of music videos through learned representations. In International Conference on Multimedia Modeling. Springer, 303–314. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. C. Adam and B. Gaudou. 2016. BDI agents in social simulations: A survey. Knowl. Eng. Rev. 31, 3, 207–238. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  8. N. R. Adam and J. C. Worthmann. 1989. Security-control methods for statistical databases: A comparative study. ACM Comput. Surv. 21, 4, 515–556. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. A. T. Adams, J. Costa, M. F. Jung, and T. Choudhury. 2015. Mindless computing: Designing technologies to subtly influence behavior. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 719–730. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. F. Adib, H. Mao, Z. Kabelac, D. Katabi, and R. C. Miller. 2015. Smart homes that monitor breathing and heart rate. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI’15. Association for Computing Machinery, New York, NY, 837–846. ISBN 9781450331456. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. R. Adolphs and D. J. Anderson. 2018. The Neuroscience of Emotion: A New Synthesis. Princeton University Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  12. R. Adolphs and D. Andler. 2018. Investigating emotions as functional states distinct from feelings. Emot. Rev. 10, 3, 191–201. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  13. Affectiva. 2020. Affectiva. https://www.affectiva.com/.Google ScholarGoogle Scholar
  14. S. Afzal and P. Robinson. 2014. Emotion data collection and its implications for affective computing. In The Oxford Handbook of Affective Computing. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  15. R. Agrawal and R. Srikant. 2000. Privacy-preserving data mining. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 439–450. Google ScholarGoogle ScholarDigital LibraryDigital Library
  16. A. Aguilera, E. Bruehlman-Senecal, O. Demasi, and P. Avila. 2017. Automated text messaging as an adjunct to cognitive behavioral therapy for depression: A clinical trial. J. Med. Internet Res. 19, 5, e148. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  17. N. Aharony, W. Pan, C. Ip, I. Khayal, and A. Pentland. December. 2011. Social fMRI: Investigating and shaping social mechanisms in the real world. Pervasive Mob. Comput. 7, 6, 643–659. .Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. M. B. Akçay and K. Oğuz. 2020. Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. Speech Commun. 116, 56–76. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  19. L. Al-Barrak, E. Kanjo, and E. M. Younis. 2017. NeuroPlace: Categorizing urban places according to mental states. PLoS One 12, 9. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  20. L. Al-Husain, E. Kanjo, and A. Chamberlain. September. 2013. Sense of space: Mapping physiological emotion response in urban space. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, UbiComp’13. Adjunct. Association for Computing Machinery, Zurich, Switzerland, 1321–1324. ISBN 978-1-4503-2215-7. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  21. F. Q. Al-Khalidi, R. Saatchi, D. Burke, H. Elphick, and S. Tan. 2011. Respiration rate monitoring methods: A review. Pediatr. Pulmonol. 46, 6, 523–529. DOI: . ISSN 87556863.Google ScholarGoogle ScholarCross RefCross Ref
  22. F. Alam and G. Riccardi. 2014. Predicting personality traits using multimodal information. In Proceedings of the 2014 ACM Multimedia on Workshop on Computational Personality Recognition. ACM, 15–18. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. S. M. Alarcao and M. J. Fonseca. July. 2019. Emotions recognition using EEG signals: A survey. IEEE Trans. Affect. Comput. 10, 3, 374–393. ISSN 1949-3045. https://ieeexplore.ieee.org/document/7946165/. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. M. R. Ali, T. Sen, B. Kane, S. Bose, T. Carroll, R. Epstein, L. K. Schubert, and E. Hoque. 2021. Novel computational linguistic measures, dialogue system and the development of SOPHIE: Standardized online patient for healthcare interaction education. IEEE Trans. Affect. Comput. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  25. A. Aljanaki, Y.-H. Yang, and M. Soleymani. March. 2017. Developing a benchmark for emotional analysis of music. PLoS One 12, 3, e0173392. ISSN 1932-6203. https://dx.plos.org/10.1371/journal.pone.0173392. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  26. M. Allen. 2017. The SAGE Encyclopedia of Communication Research Methods. SAGE Publications. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  27. C. O. Alm, D. Roth, and R. Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In HLT/EMNLP 2005—Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  28. T. R. Almaev, A. Yüce, A. Ghitulescu, and M. F. Valstar. 2013. Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction. 535–542. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  29. I. Alvarez, J. Healey, and E. Lewis. 2019. The SKYNIVI experience: Evoking startle and frustration in dyads and single drivers. In 2019 IEEE Intelligent Vehicles Symposium (IV). 76–81. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  30. M. K. Ameko, M. L. Beltzer, L. Cai, M. Boukhechba, B. A. Teachman, and L. E. Barnes. September. 2020. Offline contextual multi-armed bandits for mobile health interventions: A case study on emotion regulation. Fourteenth ACM Conference on Recommender Systems. 249–258. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. N. Anand and P. Verma. 2015. Convoluted Feelings: Convolutional and Recurrent Nets for Detecting Emotion from Audio Data. In Technical Report. Stanford University. http://vision.stanford.edu/teaching/cs231n/reports/2015/pdfs/Cs_231n_paper.pdf.Google ScholarGoogle Scholar
  32. J. Andreoni and B. D. Bernheim. 2009. Social image and the 50–50 norm: A theoretical and experimental analysis of audience effects. Econometrica 77, 5, 1607–1636. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  33. D. Aneja, D. McDuff, and S. Shah. 2019. A high-fidelity open embodied avatar with lip syncing and expression capabilities. In 2019 International Conference on Multimodal Interaction. 69–73. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  34. R. N. R. Ariffin and R. K. Zahari. December. 2013. Perceptions of the urban walking environments. Procedia Soc. Behav. Sci. 105, 589–597. ISSN 18770428. https://linkinghub.elsevier.com/retrieve/pii/S1877042813044376. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  35. S. Arora and P. Doshi. 2018. A survey of inverse reinforcement learning: Challenges, methods and progress. arXiv preprint arXiv:1806.06877.Google ScholarGoogle Scholar
  36. J. Atkinson and D. Campos. 2016. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Syst. Appl. 47, 35–41. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  37. P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli. November. 2010. Multimodal fusion for multimedia analysis: A survey. Multimed. Syst. 16, 6, 345–379. ISSN 09424962. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  38. M. Augstein, E. Herder, and W. Wörndl. 2019. Personalized Human–Computer Interaction. Walter de Gruyter GmbH & Co KG.Google ScholarGoogle Scholar
  39. E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, and I. Rahwan. 2018. The moral machine experiment. Nature 563, 7729, 59–64. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  40. S. Bafna. July. 2016. Space syntax: A brief introduction to its logic and analytical techniques. Environ. Behav. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  41. S. C. Bagui. 2005. Combining pattern classifiers: Methods and algorithms. Technometrics 47, 517–518. ISSN 0040-1706. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  42. R. A. Baksh, S. Abrahams, B. Auyeung, and S. E. MacPherson. 2018. The Edinburgh Social Cognition Test (ESCoT): Examining the effects of age on a new measure of theory of mind and social norm understanding. PLoS One 13, 4, 1–16. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  43. T. Baltrušaitis, C. Ahuja, and L.-P. Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 41, 2, 423–443. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  44. T. Baltrušaitis, N. Banda, and P. Robinson. 2013. Dimensional affect recognition using continuous conditional random fields. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 1–8. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  45. T. Baltrušaitis, P. Robinson, and L.-P. Morency. 2016. OpenFace: An open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 1–10. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  46. R. Banse and K. Scherer. 1996. Acoustic profiles in vocal emotion expression. J. Pers. Soc. Psychol. 70, 3, 32–41. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  47. T. Bänziger, S. Patel, and K. R. Scherer. 2014. The role of perceived voice and speech characteristics in vocal emotion communication. J. Nonverbal Behav. 38, 1, 31–52. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  48. R. Barba, A. P. D. Madrid, and J. G. Boticario. January. 2015. Development of an inexpensive sensor network for recognition of sitting posture. Int. J. Distrib. Sens. Netw. 2015. ISSN 1550-1329. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  49. S. Baron-Cohen. 1996. Reading the mind in the face: A cross-cultural and developmental study. Vis. Cogn. 3, 1, 39–60. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  50. S. Baron-Cohen. 1997. How to build a baby that can read minds: Cognitive mechanisms in mindreading. In The Maladapted Mind: Classic Readings in Evolutionary Psychopathology. 207–239. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  51. S. Baron-Cohen and S. Wheelwright. 2004. The Empathy Quotient: An investigation of adults with Asperger Syndrome or high functioning autism, and normal sex differences. J. Autism Dev. Disord. 34, 2, 163–175. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  52. L. F. Barrett. 2009. The future of psychology: Connecting mind to brain. Perspect. Psychol. Sci. 4, 4, 326–339. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  53. L. F. Barrett. 2017. How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt.Google ScholarGoogle Scholar
  54. L. F. Barrett and E. Bliss-Moreau. 2009. Affect as a psychological primitive. Adv. Exp. Soc. Psychol. 41, 167–218. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  55. L. F. Barrett, B. Mesquita, and M. Gendron. October. 2011. Context in emotion perception. Curr. Dir. Psychol. Sci. 20, 5, 286–290. ISSN 0963-7214. http://journals.sagepub.com/doi/10.1177/0963721411422522. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  56. L. F. Barrett, R. Adolphs, S. Marsella, A. M. Martinez, and S. D. Pollak. 2019. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychol. Sci. Public Interest 20, 1–68. ISSN 21600031. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  57. M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski. 1999. Measuring facial expressions by computer image analysis. Psychophysiology 36, 2, 253–263. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  58. C. Bartneck and J. Forlizzi. 2004. A design-centred framework for social human–robot interaction. In RO-MAN 2004—13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759). IEEE, 591–594. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  59. C. Bartneck and M. J. Lyons. 2009. Facial expression analysis, modeling and synthesis: Overcoming the limitations of artificial intelligence with the art of the soluble. In Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. IGI Global, 34–55. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  60. A. G. Barto and S. Mahadevan. January. 2003. Recent advances in hierarchical reinforcement learning. Discrete Event Dyn. Syst. 13, 1–2, 41–77. ISSN 0924-6703. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  61. A. Batliner, S. Hantke, and B. W. Schuller. 2020. Ethics and good practice in computational paralinguistics. IEEE Trans. Affect. Comput. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  62. A. Batliner, S. Steidl, C. Hacker, and E. Nöth. 2008. Private emotions versus social interaction: A data-driven approach towards analysing emotion in speech. User Model. User-Adapt. Interact. 18, 1–2, 175–206. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  63. C. D. Batson. 2009. These things called empathy: Eight related but distinct phenomena. In J. Decety & W. Ickes (Eds.), The Social Neuroscience of Empathy. MIT Press, 3–15. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  64. C. D. Batson. 2011. Altruism in Humans. Oxford University Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  65. S. D. Baum. 2017. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. 32, 4, 543–551. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  66. Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen. 2015. LIRIS-ACCEDE: A video database for affective content analysis. IEEE Trans. Affect. Comput. 6, 1, 43–55. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  67. R. Beard, R. Das, R. W. Ng, P. K. Gopalakrishnan, L. Eerens, P. Swietojanski, and O. Miksik. 2018. Multi-modal sequence fusion via recursive attention for emotion recognition. In Proceedings of the 22nd Conference on Computational Natural Language Learning. 251–259. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  68. A. Beatty. 2010. How did it feel for you? Emotion, narrative, and the limits of ethnography. Am. Anthropol. 112, 3, 430–443. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  69. A. T. Beck, R. A. Steer, and M. G. Carbin. 1988. Psychometric properties of the Beck Depression Inventory: Twenty-five years of evaluation. Clin. Psychol. Rev. 8, 77–100. ISSN 02727358. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  70. A. Bellas, S. Perrin, B. Malone, K. Rogers, G. Lucas, E. Phillips, C. Tossell, and E. de Visser. 2020. Rapport building with social robots as a method for improving mission debriefing in human–robot teams. In 2020 Systems and Information Engineering Design Symposium (SIEDS). IEEE, 160–163. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  71. T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka. 2018. Social robots for education: A review. Sci. Robot. 3, 21. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  72. A. Ben-Youssef, C. Clavel, S. Essid, M. Bilac, M. Chamoux, and A. Lim. 2017. UE-HRI: A new dataset for the study of user engagement in spontaneous human–robot interactions. In Proceedings of the 19th ACM International Conference on Multimodal Interaction. ACM, 464–472. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  73. Y. Bengio, J. Louradour, R. Collobert, and J. Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning. 41–48. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  74. Ŝ. Beňuš, M. Trnka, E. Kuric, L. Marták, A. Gravano, J. Hirschberg, and R. Levitan. 2018. Prosodic entrainment and trust in human–computer interaction. In Proceedings of the 9th International Conference on Speech Prosody. International Speech Communication Association, Baixas, France, 220–224. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  75. A. Bera, T. Randhavane, and D. Manocha. June. 2019. The emotionally intelligent robot: Improving socially-aware human prediction in crowded environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.Google ScholarGoogle Scholar
  76. A. Betella and P. F. M. J. Verschure. February. 2016. The affective slider: A digital self-assessment scale for the measurement of human emotions. PLoS One 11, 2, e0148037. ISSN 1932-6203. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  77. N. Bianchi-Berthouze and A. Kleinsmith. 2003. A categorical approach to affective gesture recognition. Conn. Sci. 15, 259–269. ISSN 09540091. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  78. E. A. Björling and E. Rose. 2019. Participatory research principles in human-centered design: Engaging teens in the co-design of a social robot. Multimodal Technol. Interact 3, 1, 8. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  79. M. M. Blattner and E. P. Glinert. 1996. Multimodal integration. IEEE Multimed. 3, 14–24. ISSN 1070986X. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  80. P. Bloom. 2017. Empathy and its discontents. Trends Cogn. Sci. 21, 1, 24–31. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  81. A. Bogomolov, B. Lepri, M. Ferron, F. Pianesi, and A. Pentland. 2014. Daily stress recognition from mobile phone data, weather conditions and individual traits. In MM 2014—Proceedings of the 2014 ACM Conference on Multimedia. 477–486. ISBN 9781450330633. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  82. R. M. Bond, C. J. Fariss, J. J. Jones, A. D. Kramer, C. Marlow, J. E. Settle, and J. H. Fowler. 2012. A 61-million-person experiment in social influence and political mobilization. Nature 489, 7415, 295–298. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  83. S. Bostok, A. D. Crosswell, A. A. Prather, and A. Steptoe. 2019. Mindfulness on-the-go: Effects of a mindfulness meditation app on work stress and well-being. J. Occup. Health Psychol. 24, 127–138. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  84. H. B. Bosworth and K. W. Schaie. 1997. The relationship of social environment, social networks, and health outcomes in the Seattle Longitudinal Study: Two analytical approaches. J. Gerontol. B Psychol. Sci. Soc. Sci. 52, 5, P197–P205. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  85. W. Boucsein. 2012. Electrodermal Activity (2nd. ed.). ISBN 9781461411260. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  86. S. Bozinovski. 1982. A self-learning system using secondary reinforcement. In R. Trappl (Ed.), Cybernetics and Systems. Elsevier Science Publishers, North Holland, 397–402.Google ScholarGoogle Scholar
  87. S. Bozinovski. 2003. Anticipation Driven Artificial Personality: Building on Lewin and Loehlin. Springer, Berlin, 133–150. ISBN 978-3-540-45002-3. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  88. S. Bozinovski. 2014. Modeling mechanisms of cognition–emotion interaction in artificial neural networks, since 1981. Procedia Comput. Sci. 41, 255–263. ISSN 1877-0509. 5th Annual International Conference on Biologically Inspired Cognitive Architectures, 2014 BICA. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  89. M. M. Bradley and P. J. Lang. 1994. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59. ISSN 00057916. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  90. C. Bradley and R. Wingfield. 2020. National Artificial Intelligence Strategies and Human Rights: A Review. Retrieved March 11, 2021, from https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/national_artifical_intelligence_strategies_and_human_rights-a_review1.pdf.Google ScholarGoogle Scholar
  91. M. M. Bradley, B. N. Cuthbert, and P. J. Lang. April. 2010. Affect and the startle reflex. In Startle Modification. Cambridge University Press, 157–184. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  92. G. N. Bratman, C. B. Anderson, M. G. Berman, B. Cochran, S. D. Vries, J. Flanders, C. Folke, H. Frumkin, J. J. Gross, T. Hartig, P. H. Kahn, M. Kuo, J. J. Lawler, P. S. Levin, T. Lindahl, A. Meyer-Lindenberg, R. Mitchell, Z. Ouyang, J. Roe, L. Scarlett, J. R. Smith, M. v. d. Bosch, B. W. Wheeler, M. P. White, H. Zheng, and G. C. Daily. July. 2019. Nature and mental health: An ecosystem service perspective. Sci. Adv. 5, 7, eaax0903. ISSN 2375-2548. https://advances.sciencemag.org/content/5/7/eaax0903. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  93. C. Breazeal. 2002a. Regulation and entrainment in human–robot interaction. Int. J. Rob. Res. 21, 10–11, 883–902. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  94. C. L. Breazeal. 2002b. Designing Sociable Robots. MIT Press. Google ScholarGoogle ScholarDigital LibraryDigital Library
  95. C. Breazeal. July. 2003. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 59, 1–2, 119–155. ISSN 1071-5819. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  96. C. Breazeal, K. Dautenhahn, and T. Kanda. 2016. Social robotics. In Springer Handbook of Robotics. 1935–1972. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  97. M. Breidt, C. Wallraven, D. W. Cunningham, and H. Bulthoff. 2003. Facial animation based on 3D scans and motion capture. In SIGGRAPH, Vol. 3. http://hdl.handle.net/11858/00-001M-0000-0013-DC35-C.Google ScholarGoogle Scholar
  98. M. Bretan, G. Hoffman, and G. Weinberg. 2015. Emotionally expressive dynamic physical behaviors in robots. Int. J. Hum. Comput. Stud. 78, 1–16. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  99. J. Broekens, W. A. Kosters, and F. J. Verbeek. 2007. Affect, anticipation, and adaptation: Affect-controlled selection of anticipatory simulation in artificial adaptive agents. Adapt. Behav. 15, 4, 397–422. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  100. S. Bubeck and N. Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Found. Trends Mach. Learn. 5, 1–122. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  101. J. Buolamwini and T. Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. PMLR, 77–91.Google ScholarGoogle Scholar
  102. Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros. 2019. Large-scale study of curiosity-driven learning. In 7th International Conference on Learning Representations (ICLR 2019). 1–17.Google ScholarGoogle Scholar
  103. J. K. Burgoon, N. Magnenat-Thalmann, M. Pantic, and A. Vinciarelli. 2017. Social Signal Processing. Cambridge University Press. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  104. B. L. Burke, C. W. Dunn, D. C. Atkins, and J. S. Phelps. 2004. The emerging evidence base for motivational interviewing: A meta-analytic and qualitative inquiry. J. Cogn. Psychother. 18, 4, 309–322. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  105. C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, and S. Narayanan. 2004. Analysis of emotion recognition using facial expressions, speech and multimodal information. In Proceedings of the 6th International Conference on Multimodal. https://dl.acm.org/citation.cfm?id=1027968.Google ScholarGoogle Scholar
  106. C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Lang. Resour. Eval. 42, 4, 335–359. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  107. J. T. Cacioppo and L. G. Tassinary. 1989. Inferring psychological significance from physiological signals. Am. Psychol. 45, 1, 16–28. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  108. R. A. Calvo and S. D’Mello. 2010. Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Trans. Affect. Comput. 1, 1, 18–37. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  109. M. G. Calvo and P. J. Lang. September. 2004. Gaze patterns when looking at emotional pictures: motivationally biased attention. Motiv. Emot. 28, 3, 221–243. ISSN 01467239. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  110. R. A. Calvo and D. Peters. 2014. Positive Computing: Technology for Wellbeing and Human Potential. MIT Press. Google ScholarGoogle ScholarDigital LibraryDigital Library
  111. R. Calvo, S. D’Mello, J. Gratch, A. Kappas, N. Bianchi-Berthouze, and A. Kleinsmith. 2014a. Automatic recognition of affective body expressions. In The Oxford Handbook of Affective Computing. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  112. R. Calvo, S. D’Mello, J. Gratch, A. Kappas, C.-C. Lee, J. Kim, A. Metallinou, C. Busso, S. Lee, and S. S. Narayanan. 2014b. Speech in affective computing. In The Oxford Handbook of Affective Computing. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  113. R. A. Calvo, S. D’Mello, J. M. Gratch, and A. Kappas. 2015. The Oxford Handbook of Affective Computing. Oxford University Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  114. E. Cambria, D. Das, S. Bandyopadhyay, and A. Feraco (Eds.). 2017. A Practical Guide to Sentiment Analysis, Vol. 5 of Socio-Affective Computing. Springer International Publishing, Cham. ISBN 978-3-319-55392-4. http://link.springer.com/10.1007/978-3-319-55394-8. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  115. D. T. Campbell. 1957. Factors relevant to the validity of experiments in social settings. Psychol. Bull. 54, 4, 297–312. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  116. A. Camurri, I. Lagerlöf, and G. Volpe. 2003. Recognizing emotion from dance movement: Comparison of spectator recognition and automated techniques. Int. J. Hum. Comput. Stud. 59, 213–225. ISSN 10715819. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  117. W. B. Cannon. 1927. The James–Lange theory of emotions: A critical examination and an alternative theory. Am. J. Psychol. 39, 1/4, 106–124. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  118. L. Canzian and M. Musolesi. 2015. Trajectories of depression. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing—UbiComp’15. ACM Press, New York, NY, 1293–1304. ISBN 9781450335744. http://dl.acm.org/citation.cfm?doid=2750858.2805845. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  119. H. Cao, D. G. Cooper, M. K. Keutmann, R. C. Gur, A. Nenkova, and R. Verma. 2014. CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE Trans. Affect. Comput. 5 4, 377–390. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  120. G. Caridakis, L. Malatesta, L. Kessous, N. Amir, A. Raouzaiou, and K. Karpouzis. 2006. Modeling naturalistic affective states via facial and vocal expressions recognition. In Proceedings of the 8th International Conference on Multimodal Interfaces. ACM, 146–154. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  121. G. Caridakis, G. Castellano, L. Kessous, A. Raouzaiou, L. Malatesta, S. Asteriadis, and K. Karpouzis. 2007. Multimodal emotion recognition from expressive faces, body gestures and speech. In IFIP International Conference on Artificial Intelligence Applications and Innovations, Springer, 375–388. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  122. P. Carreno-Medrano, L. Tian, A. Allen, S. Sumartojo, M. Mintrom, E. Coronado, G. Venture, E. Croft, and D. Kulic. 2021. Aligning robot’s behaviours and users’ perceptions through participatory prototyping. arXiv preprint arXiv:2101.03660.Google ScholarGoogle Scholar
  123. J. M. Carroll. 2000. Making Use: Scenario-Based Design of Human–Computer Interactions. MIT Press. Google ScholarGoogle ScholarDigital LibraryDigital Library
  124. D. V. Carvalho, E. M. Pereira, and J. S. Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics 8, 8, 832. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  125. S. Carvalho, J. Leite, S. Galdo-Álvarez, and O. F. Gonçalves. 2012. The emotional movie database (EMDB): A self-report and psychophysiological study. Appl. Psychophysiol. Biofeedback 37, 4, 279–294. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  126. C. Castelfranchi. 2000. Affective appraisal versus cognitive evaluation in social emotions and interactions. In A. Paiva (Ed.), Affective Interactions: Towards a New Generation of Computer Interfaces. Springer, Berlin, 76–106. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  127. J. Ceha, N. Chhibber, J. Goh, C. McDonald, P.-Y. Oudeyer, D. Kulić, and E. Law. 2019. Expression of curiosity in social robots: Design, perception, and effects on behaviour. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  128. A. Celeghin, M. Diano, A. Bagnis, M. Viola, and M. Tamietto. 2017. Basic emotions in human neuroscience: Neuroimaging and beyond. Front. Psychol. 8, 1432. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  129. C. Chalmers, P. Fergus, C. A. Curbelo Montanez, S. Sikdar, F. Ball, and B. Kendall. 2020. Detecting activities of daily living and routine behaviours in dementia patients living alone using smart meter load disaggregation. IEEE Trans. Emerg. Topics Comput. 1. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  130. S. Chancellor and M. D. Choudhury. 2020. Methods in predictive techniques for mental health status on social media: A critical review. NPJ Digit. Med. 3, 43. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  131. G. Chanel, J. J. Kierkels, M. Soleymani, and T. Pun. 2009. Short-term emotion assessment in a recall paradigm. Int. J. Hum. Comput. Stud. 67, 607–627. ISSN 10715819. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  132. D. Chatzakou, A. Vakali, and K. Kafetsios. 2017. Detecting variation of emotions in online activities. Expert Syst. Appl. 89, 318–332. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  133. S. Chen and Q. Jin. 2016. Multi-modal conditional attention fusion for dimensional emotion prediction. In MM 2016—Proceedings of the 2016 ACM Multimedia Conference. ISBN 9781450336031. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  134. J. Chen, Z. Chen, Z. Chi, and H. Fu. 2014. Emotion recognition in the wild with feature fusion and multiple kernel learning. In ICMI 2014—Proceedings of the 2014 International Conference on Multimodal Interaction. 508–513. ISBN 9781450328852. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  135. D. Chetverikov and R. Péteri. 2005. A brief survey of dynamic texture description and recognition. In Computer Recognition Systems. Springer, 17–26. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  136. H.-C. Chou and C.-C. Lee. 2019. Every rating matters: Joint learning of subjective labels and individual annotators for speech emotion classification. In ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 5886–5890. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  137. T. Choudhury and A. Pentland. 2002. The sociometer: A wearable device for understanding human networks. In Proceedings of CSCW-02 Workshop: Ad Hoc Communications and Collaboration in Ubiquitous Computing Environment. Kauai, HI.Google ScholarGoogle Scholar
  138. N. Churamani, P. Anton, M. Brügger, E. Fließwasser, T. Hummel, J. Mayer, W. Mustafa, H. G. Ng, T. L. C. Nguyen, Q. Nguyen, M. Soll, S. Springenberg, S. Griffiths, S. Heinrich, N. Navarro-Guerrero, E. Strahl, J. Twiefel, C. Weber, and S. Wermter. 2017. The impact of personalisation on human–robot interaction in learning scenarios. In Proceedings of the 5th International Conference on Human Agent Interaction. ACM, 171–180. .Google ScholarGoogle ScholarDigital LibraryDigital Library
  139. N. Churamani, S. Kalkan, and H. Gunes. 2020. Continual learning for affective robotics: Why, what and how? In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 425–431. IEEE. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  140. E. A. Clark, J. Kessinger, S. E. Duncan, M. A. Bell, J. Lahne, D. L. Gallagher, and S. F. O’Keefe. 2020. The facial action coding system for characterization of human affective response to consumer product-based stimuli: A systematic review. Front. Psychol. 11, 920. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  141. L. Clark, N. Pantidi, O. Cooney, P. Doyle, D. Garaialde, J. Edwards, B. Spillane, E. Gilmartin, C. Murad, C. Munteanu, V. Wade, and Benjamin R. Cowan. 2019. What makes a good conversation? Challenges in designing truly conversational agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  142. S. Cohen. 2004. Social relationships and health. Am. Psychol. 59, 8, 676–684. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  143. P. R. Cohen. 2018. Back to the future for dialogue research: A position paper. arXiv preprint arXiv:1812.01144.Google ScholarGoogle Scholar
  144. S. Cohen, T. Kamarck, and R. Mermelstein. 1983. A global measure of perceived stress. J. Health Soc. Behav. 24, 385–396. ISSN 00221465. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  145. I. Cohen, N. Sebe, A. Garg, L. S. Chen, and T. S. Huang. 2003. Facial expression recognition from video sequences: Temporal and static modeling. Comput. Vis. Image Underst. 91, 160–187. ISSN 10773142. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  146. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. August. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12, 2493–2537. Google ScholarGoogle ScholarDigital LibraryDigital Library
  147. K. Conger, R. Fausset, and S. F. Kovaleski. 2019. San Francisco bans facial recognition technology. The New York Times 14.Google ScholarGoogle Scholar
  148. M. G. Constantin, L. D. Stefan, B. Ionescu, C.-H. Demarty, M. Sjoberg, M. Schedl, and G. Gravier. 2020. Affect in multimedia: Benchmarking violent scenes detection. IEEE Trans. Affect. Comput. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  149. R. Cook, G. Bird, C. Catmur, C. Press, and C. Heyes. 2014. Mirror neurons: From origin to function. Behav. Brain Sci. 37, 2, 177–192. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  150. T. F. Cootes, G. J. Edwards, and C. J. Taylor. 1998. Active appearance models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). ISBN 3540646132. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  151. T. F. Cootes, G. J. Edwards, and C. J. Taylor. 2001. Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell. 23, 6, 681–685. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  152. D. T. Cordaro, R. Sun, D. Keltner, S. Kamble, N. Huddar, and G. McNeil. 2018. Universals and cultural variations in 22 emotional expressions across five cultures. Emotion 18, 1, 75–93. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  153. M. O. Cordel, S. Fan, Z. Shen, and M. S. Kankanhalli. 2019. Emotion-aware human attention prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4026–4035.Google ScholarGoogle Scholar
  154. C. Corneanu, F. Noroozi, D. Kaminska, T. Sapinski, S. Escalera, and G. Anbarjafari. 2018. Survey on emotional body gesture recognition. IEEE Trans. Affect. Comput. 12, 505–523. ISSN 19493045. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  155. E. Coronado, D. Deuff, P. Carreno-Medrano, L. Tian, D. Kulić, S. Sumartojo, F. Mastrogio-Vanni, and G. Venture. 2021. Towards a modular and distributed end-user development framework for human–robot interaction. IEEE Access 9, 12675–12692. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  156. I. Cos, L. Cañamero, G. M. Hayes, and A. Gillies. December. 2013. Hedonic value: Enhancing adaptation for motivated agents. Adapt. Behav. 21, 6, 465–483. ISSN 1059-7123. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  157. S. Cosentino, E. I. Randria, J.-Y. Lin, T. Pellegrini, S. Sessa, and A. Takanishi. 2018. Group emotion recognition strategies for entertainment robots. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 813–818. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  158. J. Costa, A. T. Adams, M. F. Jung, F. Guimbretière, and T. Choudhury. 2016. EmotionCheck: Leveraging bodily signals and false feedback to regulate our emotions. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 758–769. ACM. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  159. A. S. Cowen and D. Keltner. 2017. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proc. Natl. Acad. Sci. 114, 38, E7900–E7909. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  160. R. Cowie. 2015. Ethical issues in affective computing. In The Oxford Handbook of Affective Computing. Oxford University Press, 334–348. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  161. R. Cowie and E. Douglas-Cowie. 1996. Automatic statistical analysis of the signal and prosodic signs of emotion in speech. In International Conference on Spoken Language Processing, ICSLP, Proceedings. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  162. R. Cowie, E. Douglas-Cowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schröder. 2000. FEELTRACE: An instrument for recording perceived emotion in real time. ISCA Workshop on Speech Emotion. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  163. CSS Electronics. 2018. OBD2 data logger—Easily record your car data. Retrieved September 18, 2020, from https://www.csselectronics.com/screen/page/obd2-data-logger-sd-memory-convert/.Google ScholarGoogle Scholar
  164. B. M. Cuff, S. J. Brown, L. Taylor, and D. J. Howat. 2016. Empathy: A review of the concept. Emot. Rev. 8, 2, 144–153. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  165. M. L. Cummings. 2006. Automation and accountability in decision support system interface design. J. Technol. Stud. 32, 23–31. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  166. N. Dalal and B. Triggs. 2005. Histograms of oriented gradients for human detection. In Proceedings—2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005. 886–893. ISBN 0769523722. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  167. A. R. Damasio. 1994. Descartes’ Error. Emotion, Reason and the Human Brain. Avon Books, New York.Google ScholarGoogle Scholar
  168. E. S. Dan-Glauser and K. R. Scherer. June. 2011. The Geneva Affective PicturE Database (GAPED): A new 730-picture database focusing on valence and normative significance. Behav. Res. Methods 43, 2, 468–477. ISSN 1554-3528. http://link.springer.com/10.3758/s13428-011-0064-1. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  169. C. Darwin. 1873. The Expression of the Emotions in Man and Animals. D. Appleton. https://books.google.com/books?id=4jp9AAAAMAAJ.Google ScholarGoogle Scholar
  170. K. Dautenhahn. 2007. Methodology & themes of human–robot interaction: A growing research field. Int. J. Adv. Robot. Syst. 4, 1, 15. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  171. K. Dautenhahn, I. Werry, J. Rae, P. Dickerson, P. Stribling, and B. Ogden. 2002. Robotic playmates. In L. Cañamero, B. Edmonds, K. Dautenhahn, and A. Bond (Eds.), Socially Intelligent Agents. Springer, Boston, MA, 117–124. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  172. S. K. Davis, M. Morningstar, M. A. Dirks, and P. Qualter. 2020. Ability emotional intelligence: What about recognition of emotion in voices? Pers. Individ. Dif. 160, 10993, 1–5. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  173. M. De Graaf, S. Ben Allouch, and J. Van Dijk. 2017. Why do they refuse to use my robot? Reasons for non-use derived from a long-term home study. In Proceedings of the 2017 ACM/IEEE International Conference on Human–Robot Interaction. ACM, 224–233. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  174. C. J. De Luca. 1997. The use of surface electromyography in biomechanics. J. Appl. Biomech. 13, 2, 135–163. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  175. M. De Meijer. 1989. The contribution of general features of body movement to the attribution of emotions. J. Nonverbal Behav. 13, 4, 247–268. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  176. T. H. M. de Oliveira and M. Painho. June. 2015. Emotion & stress mapping: Assembling an ambient geographic information-based methodology in order to understand smart cities. In 2015 10th Iberian Conference on Information Systems and Technologies (CISTI), Aveiro, Portugal. IEEE, 1–4. ISBN 978-989-98434-5-5. http://ieeexplore.ieee.org/document/7170469/. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  177. L. C. De Silva, T. Miyasato, and R. Nakatsu. 1997. Facial emotion recognition using multimodal information. In Proceedings of the International Conference on Information, Communications and Signal Processing. ICICS 1, 397–401. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  178. F. B. de Waal and S. D. Preston. 2017. Mammalian empathy: Behavioural manifestations and neural basis. Nat. Rev. Neurosci. 18, 8, 498–509. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  179. C. Debes, A. Merentitis, S. Sukhanov, M. Niessen, N. Frangiadakis, and A. Bauer. 2016. Monitoring activities of daily living in smart homes: Understanding human behavior. IEEE Signal Process. Mag. 33, 2, 81–94. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  180. J. Decety and P. L. Jackson. 2004. The functional architecture of human empathy. Behav. Cogn. Neurosci. Rev. 3, 2, 71–100. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  181. J. Decety and M. Meyer. 2008. From emotion resonance to empathic understanding: A social developmental neuroscience account. Dev. Psychopathol. 20, 4, 1053–1080. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  182. E. Delaherche, M. Chetouani, A. Mahdhaoui, C. Saint-Georges, S. Viaux, and D. Cohen. 2012. Interpersonal synchrony: A survey of evaluation methods across disciplines. IEEE Trans. Affect. Comput. 3, 3, 349–365. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  183. A. Delfanti and B. Frey. 2020. Humanly extended automation or the future of work seen through amazon patents. Sci. Technol. Human Values 46. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  184. P. Denman, E. Lewis, S. Prasad, J. Healey, H. Syed, and L. Nachman. 2018. Affsens: A mobile platform for capturing affect in context. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. Association for Computing Machinery, New York, NY, USA, 321–326, 6. ISBN: 9781450359412. DOI: . Barcelona, Spain, MobileHCI ’18.Google ScholarGoogle ScholarDigital LibraryDigital Library
  185. I. Deutsch, H. Erel, M. Paz, G. Hoffman, and O. Zuckerman. 2019. Home robotic devices for older adults: Opportunities and concerns. Comput. Human Behav. 98, 122–133. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  186. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.Google ScholarGoogle Scholar
  187. A. Dhall, R. Goecke, J. Joshi, M. Wagner, and T. Gedeon. 2013. Emotion recognition in the wild challenge 2013. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction. 509–516. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  188. A. Dhall, G. Sharma, R. Goecke, and T. Gedeon. 2020. EmotiW 2020: Driver gaze, group emotion, student engagement and physiological signal based challenges. In Proceedings of the 2020 International Conference on Multimodal Interaction. 784–789. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  189. J. Diemer, G. W. Alpers, H. M. Peperkorn, Y. Shiban, and A. Mühlberger. January. 2015. The impact of perception and presence on emotional reactions: A review of research in virtual reality. Front. Psychol. 6. ISSN 1664-1078. http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00026/abstract. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  190. S. D’Mello and R. A. Calvo. 2013. Beyond the basic emotions: What should affective computing compute? CHI’13 Extended Abstracts on Human Factors in Computing Systems, 2287–2294. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  191. S. K. D’Mello and J. Kory. 2015. A review and meta-analysis of multimodal affect detection systems. ACM Comput. Surv. 47, 1–36. DOI: . ISSN 15577341.Google ScholarGoogle ScholarDigital LibraryDigital Library
  192. A. Dobrosovestnova and G. Hannibal. 2020. Teachers’ disappointment: Theoretical perspective on the inclusion of ambivalent emotions in human–robot interactions in education. In Proceedings of the 2020 ACM/IEEE International Conference on Human–Robot Interaction. 471–480. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  193. F. Doshi-Velez and B. Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.Google ScholarGoogle Scholar
  194. E. Douglas-Cowie, N. Campbell, R. Cowie, and P. Roach. 2003. Emotional speech: Towards a new generation of databases. Speech Commun. 40, 1, 33–60. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  195. E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. McRorie, J. Martin, L. Devillers, S. Abrilian, A. Batliner, N. Amir, and K. Karpouzis. 2007. The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data. In Affective Computing and Intelligent Interaction. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  196. D. Dua and C. Graff. 2017. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml.Google ScholarGoogle Scholar
  197. B. Dudzik, H. Hung, M. Neerincx, and J. Broekens. 2020. Investigating the influence of personal memories on video-induced emotions. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization. 53–61. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  198. B. R. Duffy. 2003. Anthropomorphism and the social robot. Rob. Auton. Syst. 42, 3–4, 177–190. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  199. D. Dupré, E. G. Krumhuber, D. Küster, and G. J. McKeown. April. 2020. A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLoS One 15, 4, 1–17. . DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  200. C. Dwork. 2008. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation. Springer, 1–19. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  201. I. Dziobek, S. Fleck, E. Kalbe, K. Rogers, J. Hassenstab, M. Brand, J. Kessler, J. K. Woike, O. T. Wolf, and A. Convit. 2006. Introducing MASC: A movie for the assessment of social cognition. J. Autism Dev. Disord. 36, 5, 623–636. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  202. N. Eagle and A. Pentland. 2006. Reality mining: Sensing complex social systems. Pers. Ubiquitous Comput. 10, 4, 255–268. ISSN 1617-4909. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  203. J. D. Eastwood, D. Smilek, and P. M. Merikle. 2001. Differential attentional guidance by unattended faces expressing positive and negative emotion. Percept. Psychophys. 63, 6, 1004–1013. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  204. T. Eerola and J. K. Vuoskoski. January. 2011. A comparison of the discrete and dimensional models of emotion in music. Psychol. Music 39, 1, 18–49. ISSN 0305-7356. http://journals.sagepub.com/doi/10.1177/0305735610362821. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  205. N. Eisenberg. 2001. The core and correlates of affective social competence. Soc. Dev. 10, 1, 120–124. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  206. P. Ekkekakis and J. A. Russell. 2013. The Measurement of Affect, Mood, and Emotion. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  207. P. Ekman. 1992. An argument for basic emotions. Cogn. Emot. 6 (3–4), 169–200. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  208. P. Ekman. 2005. Basic emotions. In Handbook of Cognition and Emotion. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  209. P. Ekman and W. V. Friesen. 1971. Constants across cultures in the face and emotion. J. Pers. Soc. Psychol. 17 (2), 124–129.Google ScholarGoogle ScholarCross RefCross Ref
  210. P. Ekman and W. V. Friesen. 1977. Manual for the facial action coding system. Consult. Psychol. ISSN: 0148-0227.Google ScholarGoogle Scholar
  211. P. Ekman and W. V. Friesen. 1978. Facial Action Coding System: Investigator’s Guide. Consulting Psychologists Press.Google ScholarGoogle Scholar
  212. P. Ekman and W. V. Friesen. 2003. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. ISHK.Google ScholarGoogle Scholar
  213. P. Ekman, W. V. Friesen, and P. Ellsworth. 1972. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Pergamon Press, Elmsford, NY.Google ScholarGoogle Scholar
  214. P. Ekman, W. V. Friesen, and J. C. Hager. 2002. Facial Action Coding System: The Manual on CD ROM. Salt Lake City, A Human Face.Google ScholarGoogle Scholar
215. N. El Haouij, J.-M. Poggi, S. Sevestre-Ghalila, R. Ghozi, and M. Jaïdane. 2018. AffectiveROAD system and database to assess driver’s attention. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing, SAC’18. Association for Computing Machinery, New York, NY, 800–803.
216. J. Elster. 1989. Social norms and economic theory. J. Econ. Perspect. 3, 4, 99–117.
217. C. Epp, M. Lippold, and R. L. Mandryk. 2011. Identifying emotional states using keystroke dynamics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’11. Association for Computing Machinery, New York, NY, 715–724. ISBN: 9781450302289.
218. S. Eriksén. 2002. Designing for accountability. In Proceedings of the Second Nordic Conference on Human–Computer Interaction. 177–186.
219. G. W. Evans, C. Smith, and K. Pezdek. June 1982. Cognitive maps and urban form. J. Am. Plann. Assoc. 48, 2, 232–244. ISSN: 0194-4363.
220. F. Eyben, M. Wöllmer, A. Graves, B. Schuller, E. Douglas-Cowie, and R. Cowie. 2010a. On-line emotion recognition in a 3-D activation–valence–time continuum using acoustic and linguistic cues. J. Multimodal User Interfaces 3, 1–2, 7–19.
221. F. Eyben, M. Wöllmer, and B. Schuller. 2010b. openSMILE: The Munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM International Conference on Multimedia, MM’10. Association for Computing Machinery, New York, NY, 1459–1462. ISBN: 9781605589336.
222. F. Eyben, F. Weninger, and B. Schuller. 2013. Affect recognition in real-life acoustic conditions—A new perspective on feature selection. In Proceedings of INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, Lyon, France.
223. F. Eyben, K. R. Scherer, B. W. Schuller, J. Sundberg, E. André, C. Busso, L. Y. Devillers, J. Epps, P. Laukka, S. S. Narayanan, and K. P. Truong. 2015. The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing. IEEE Trans. Affect. Comput. 7, 2, 190–202.
224. B. Farahi. 2018. Heart of the matter: Affective computing in fashion and architecture. In Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA). 206–215. http://www.behnazfarahi.com/assets/img/Affective%20Computing%20in%20Fashion%20and%20Architecture.pdf.
225. O. Faust, Y. Hagiwara, T. J. Hong, O. S. Lih, and U. R. Acharya. 2018. Deep learning for healthcare applications based on physiological signals: A review. Comput. Methods Programs Biomed. 161, 1–13.
226. Federal Energy Regulatory Commission. 2019. Assessment of Demand Response and Advanced Metering. Staff report, United States Department of Energy.
227. O. FeldmanHall and L. J. Chang. 2018. Social learning: Emotions aid in optimizing goal-directed social behavior. In Goal-Directed Decision Making. Elsevier, 309–330.
228. A. Felnhofer, O. D. Kothgassner, M. Schmidt, A.-K. Heinzle, L. Beutl, H. Hlavacs, and I. Kryspin-Exner. October 2015. Is virtual reality emotionally arousing? Investigating five emotion inducing virtual park scenarios. Int. J. Hum.-Comput. Stud. 82, 48–56. ISSN: 1071-5819. https://linkinghub.elsevier.com/retrieve/pii/S1071581915000981.
229. C. B. Ferster and B. F. Skinner. 1957. Schedules of Reinforcement. Prentice-Hall.
230. C. Filippini, D. Perpetuini, D. Cardone, A. M. Chiarelli, and A. Merla. 2020. Thermal infrared imaging-based affective computing and its application to facilitate human robot interaction: A review. Appl. Sci. 10, 8, 2–23.
231. J. Finocchiaro, R. Maio, F. Monachou, G. K. Patro, M. Raghavan, A.-A. Stoica, and S. Tsirtsis. 2020. Bridging machine learning and mechanism design towards algorithmic fairness. arXiv preprint arXiv:2010.05434.
232. D. Fischer, A. W. McHill, A. Sano, R. W. Picard, L. K. Barger, C. A. Czeisler, E. B. Klerman, and A. J. Phillips. 2020. Irregular sleep and event schedules are associated with poorer self-reported well-being in US college students. Sleep. ISSN: 1550-9109.
233. S. T. Fiske, A. J. Cuddy, and P. Glick. 2007. Universal dimensions of social cognition: Warmth and competence. Trends Cogn. Sci. 11, 2, 77–83.
234. M. Fitzgerald and T. McClelland. 2017. What makes a mobile app successful in supporting health behaviour change? Health Educ. J. 76, 3, 373–381.
235. K. K. Fitzpatrick, A. Darcy, and M. Vierhile. June 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment. Health 4, 2, e19. ISSN: 2368-7959. http://mental.jmir.org/2017/2/e19/.
236. A. W. Flores, K. Bechtel, and C. T. Lowenkamp. 2016. False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” Fed. Probat. 80, 38.
237. K. Florio, V. Basile, M. Lai, and V. Patti. 2019. Leveraging hate speech detection to investigate immigration-related phenomena in Italy. In 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 1–7.
238. S. Folkman and R. S. Lazarus. 1984. Stress, Appraisal, and Coping. Springer Publishing Company, New York.
239. M. Fölster, U. Hess, and K. Werheid. 2014. Facial age affects emotional expression decoding. Front. Psychol. 5, 30.
240. M. Franek. 2013. Environmental factors influencing pedestrian walking speed. Percept. Mot. Skills 116, 3, 992–1019. ISSN: 0031-5125.
241. C. P. Friedman, A. S. Elstein, F. M. Wolf, G. C. Murphy, T. M. Franz, P. S. Heckerling, P. L. Fine, T. M. Miller, and V. Abraham. 1999. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: A multisite study of 2 systems. JAMA 282, 19, 1851–1856.
242. H. Frumkin, L. Frank, and R. J. Jackson. July 2004. Urban Sprawl and Public Health: Designing, Planning, and Building for Healthy Communities. Island Press. ISBN: 978-1-59726-631-4.
243. J. Fugate, H. Gouzoules, and L. F. Barrett. 2010. Reading chimpanzee faces: Evidence for the role of verbal labels in categorical perception of emotion. Emotion 10, 4, 544.
244. T. Fukuda, J. Taguri, F. Arai, M. Nakashima, D. Tachibana, and Y. Hasegawa. 2002. Facial expression of robot face for human–robot mutual communication. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation, Vol. 1. IEEE, 46–51.
245. A. Furnham, S. C. Richards, and D. L. Paulhus. 2013. The Dark Triad of personality: A 10 year review. Soc. Personal. Psychol. Compass 7, 3, 199–216.
246. I. Gabriel. 2020. Artificial intelligence, values, and alignment. Minds Mach. 30, 3, 411–437.
247. V. Gallese, C. Keysers, and G. Rizzolatti. 2004. A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 9, 396–403.
248. M. Garbarino, M. Lai, D. Bender, R. W. Picard, and S. Tognetti. November 2014. Empatica E3—A wearable wireless multi-sensor device for real-time computerized biofeedback and data acquisition. In 2014 4th International Conference on Wireless Mobile Communication and Healthcare—Transforming Healthcare Through Innovations in Mobile and Wireless Technologies (MOBIHEALTH). 39–42.
249. R. Garg and S. Sengupta. 2020. He is just like me: A study of the long-term use of smart speakers by parents and children. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4, 1, 1–24.
250. P. Gebhard. 2005. ALMA: A layered model of affect. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM, 29–36.
251. A. Gebhart and M. Price. 2020. The best Nest and Google Assistant devices in 2020. Accessed September 18, 2020. https://www.cnet.com/news/best-home-security-cameras-of-2020-arlo-google-nest-and-more/.
252. M. Gerlach, B. Farb, W. Revelle, and L. A. Nunes Amaral. 2018. A robust data-driven approach identifies four personality types across four large data sets. Nat. Hum. Behav. 2, 10, 735–742.
253. A. Ghandeharioun and R. Picard. 2017. BrightBeat: Effortlessly influencing breathing for cultivating calmness and focus. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA’17. Association for Computing Machinery, New York, NY, 1624–1631. ISBN: 9781450346566.
254. A. Ghandeharioun, A. Azaria, S. Taylor, and R. W. Picard. 2016. “Kind and grateful”: A context-sensitive smartphone app utilizing inspirational content to promote gratitude. Psychol. Well Being 6, 1, 1–21.
255. A. Ghandeharioun, D. McDuff, M. Czerwinski, and K. Rowan. 2019. EMMA: An emotion-aware wellbeing chatbot. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). 1–7.
256. G. Ghinita, P. Kalnis, A. Khoshgozaran, C. Shahabi, and K.-L. Tan. 2008. Private queries in location based services: Anonymizers are not necessary. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. 121–132.
257. X. Glorot, A. Bordes, and Y. Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). 513–520.
258. K. Goddard, A. Roudsari, and J. C. Wyatt. 2012. Automation bias: A systematic review of frequency, effect mediators, and mitigators. J. Am. Med. Inform. Assoc. 19, 1, 121–127.
259. A. L. Goldberger, L. Amaral, L. Glass, J. Hausdorff, P. C. Ivanov, R. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, and H. E. Stanley. 2000. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101, 23, e215–e220.
260. S. Golestan, P. Soleiman, and H. Moradi. 2018. A comprehensive review of technologies used for screening, assessment, and rehabilitation of autism spectrum disorder. arXiv preprint arXiv:1807.10986.
261. M. Gönen and E. Alpaydin. 2011. Multiple kernel learning algorithms. J. Mach. Learn. Res. 12, 2211–2268. ISSN: 1532-4435.
262. S. Góngora Alonso, S. Hamrioui, I. de la Torre Díez, E. Motta Cruz, M. López-Coronado, and M. Franco. 2018. Social robots for people with aging and dementia: A systematic review of literature. Telemed. e-Health 25, 7, 533–540.
263. D. Good. 2000. Individuals, interpersonal relations, and trust. In Trust: Making and Breaking Cooperative Relations. Department of Sociology, University of Oxford, Oxford, UK, 31–48.
264. J. Gorbova, I. Lusi, A. Litvin, and G. Anbarjafari. 2017. Automated screening of job candidate based on multimodal video processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 29–35.
265. D. Gordeev and R. Potapova. 2016. Detecting state of aggression in sentences using CNN. In International Conference on Speech and Computer. Springer, 240–245.
266. K. Gotham, S. Risi, A. Pickles, and C. Lord. 2007. The Autism Diagnostic Observation Schedule: Revised algorithms for improved diagnostic validity. J. Autism Dev. Disord. 37, 4, 613.
267. M. Goudbeek and K. Scherer. 2010. Beyond arousal: Valence and potency/control cues in the vocal expression of emotion. J. Acoust. Soc. Am. 128, 3, 1322–1336.
268. D. Govind and S. M. Prasanna. 2013. Expressive speech synthesis: A review. Int. J. Speech Technol. 16, 2, 237–260.
269. A. A. Grandey, L. Houston III, and D. R. Avery. 2019. Fake it to make it? Emotional labor reduces the racial disparity in service performance judgments. J. Manag. 45, 5, 2163–2192.
270. D. Grandjean, D. Sander, and K. R. Scherer. 2008. Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization. Conscious. Cogn. 17, 2, 484–495.
271. E. Granholm and S. R. Steinhauer. March 2004. Pupillometric measures of cognitive and emotional processes. Int. J. Psychophysiol. 52, 1–6.
272. A. Graves, S. Fernández, and J. Schmidhuber. 2005. Bidirectional LSTM networks for improved phoneme classification and recognition. In International Conference on Artificial Neural Networks. Springer, 799–804.
273. K. H. Greenaway, E. K. Kalokerinos, and L. A. Williams. June 2018. Context is everything (in emotion research). Soc. Personal. Psychol. Compass 12, 6, e12393. ISSN: 1751-9004.
274. S. Greene, H. Thapliyal, and A. Caban-Holt. 2016. A survey of affective computing for stress detection: Evaluating technologies in stress detection for better health. IEEE Consum. Electron. Mag. 5, 4, 44–56.
275. F. Grond, R. Motta-Ochoa, N. Miyake, T. Tembeck, M. Park, and S. Blain-Moraes. 2019. Participatory design of affective technology: Interfacing biomusic and autism. IEEE Trans. Affect. Comput.
276. C. T. Gross and N. S. Canteras. 2012. The many paths to fear. Nat. Rev. Neurosci. 13, 9, 651.
277. H.-M. Gross, A. Scheidig, K. Debes, E. Einhorn, M. Eisenbach, S. Mueller, T. Schmiedel, T. Q. Trinh, C. Weinrich, T. Wengefeld, and A. Bley. 2017. ROREAS: Robot coach for walking and orientation training in clinical post-stroke rehabilitation—Prototype implementation and evaluation in field trials. Auton. Robots 41, 3, 679–698.
278. H. Gunes, B. Schuller, M. Pantic, and R. Cowie. 2011. Emotion representation, analysis and synthesis in continuous space: A survey. In Face and Gesture 2011. IEEE, 827–834.
279. F. Guo, F. Li, W. Lv, L. Liu, and V. G. Duffy. 2020. Bibliometric analysis of affective computing researches during 1999~2018. Int. J. Hum. Comput. Interact. 36, 9, 801–814.
280. I. Gupta, J. Healey, and G. Theocharous. 2019. Sense-able lunch recommendations. In Proceedings of the 21st International Conference on Human–Computer Interaction with Mobile Devices and Services, MobileHCI’19. Association for Computing Machinery, New York, NY. ISBN: 9781450368254.
281. B. Guthier, R. Alharthi, R. Abaalkhail, and A. El Saddik. November 2014. Detection and visualization of emotions in an affect-aware city. In Proceedings of the 1st International Workshop on Emerging Multimedia Applications and Services for Smart Cities, EMASC’14. Association for Computing Machinery, Orlando, FL, 23–28. ISBN: 978-1-4503-3126-5.
282. R. E. Haamer, E. Rusadze, I. Lüsi, T. Ahmed, S. Escalera, and G. Anbarjafari. July 2018. Review on emotion recognition databases. In Human–Robot Interaction—Theory and Application. InTech.
283. R. S. Hag Ali and N. El Gayar. 2019. Sentiment analysis using unlabeled email data. In 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). 328–333.
284. A. G. Halberstadt and F. T. Lozada. 2011. Emotion development in infancy through the lens of culture. Emot. Rev. 3, 2, 158–168.
285. A. G. Halberstadt, S. A. Denham, and J. C. Dunsmore. 2001. Affective social competence. Soc. Dev. 10, 1, 79–119.
286. M. A. Hall. 1999. Correlation-based feature selection for machine learning. PhD thesis, University of Waikato, Hamilton. https://www.cs.waikato.ac.nz/ml/publications/1999/99MH-Thesis.pdf.
287. M. Hamilton. 1960. A rating scale for depression. J. Neurol. Neurosurg. Psychiatry 23, 56–62. ISSN: 0022-3050.
288. S. Han, J. S. Lerner, and D. Keltner. 2007. Feelings and consumer decision making: The Appraisal-Tendency Framework. J. Consum. Psychol. 17, 3, 158–168.
289. S. L. Handy, M. G. Boarnet, R. Ewing, and R. E. Killingsworth. August 2002. How the built environment affects physical activity: Views from urban planning. Am. J. Prev. Med. 23, 2, Suppl. 1, 64–73. ISSN: 0749-3797. http://www.sciencedirect.com/science/article/pii/S0749379702004750.
290. R. D. Hare. 2003. The Hare Psychopathy Checklist-Revised (2nd ed.). Multi-Health Systems.
291. E. Hatfield, J. T. Cacioppo, and R. L. Rapson. 1993. Emotional contagion. Curr. Dir. Psychol. Sci. 2, 3, 96–100.
292. J. Hawkins. 2017. Special report: Can we copy the brain?—What intelligent machines need to learn from the neocortex. IEEE Spectr. 54, 6, 34–71.
293. J. Healey. 2011. Recording affect in the field: Towards methods and metrics for improving ground truth labels. In S. K. D’Mello, A. C. Graesser, B. W. Schuller, and J. Martin (Eds.), Affective Computing and Intelligent Interaction—4th International Conference, ACII 2011, Memphis, TN, October 9–12, 2011, Proceedings, Part I, Vol. 6974 of Lecture Notes in Computer Science. Springer, 107–116.
294. J. Healey and B. Logan. October 2005. Wearable wellness monitoring using ECG and accelerometer data. IEEE, Osaka, Japan. ISBN: 0-7695-2419-2. https://ieeexplore.ieee.org/abstract/document/1550820.
295. J. Healey and R. W. Picard. 1998. StartleCam: A cybernetic wearable camera. In Proceedings of the 2nd IEEE International Symposium on Wearable Computers. Pittsburgh, 42–49.
296. J. Healey and R. Picard. June 2005. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. Syst. 6, 2, 156–166. ISSN: 1558-0016.
297. J. Healey, G. Theocharous, and B. Kveton. 2010a. Does my driving scare you? In Adjunct Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI’10. ACM, Pittsburgh, PA.
298. J. Healey, L. Nachman, S. Subramanian, J. Shahabdeen, and M. Morris. 2010b. Out of the lab and into the fray: Towards modeling emotion in everyday life. In Proceedings of the 8th International Conference on Pervasive Computing, Pervasive’10. Springer-Verlag, Berlin, 156–173. ISBN: 3642126537.
299. J. Healey, B. Chamberlain, L. Tian, A. Sano, and W. Hsu. 2020. 4th IJCAI Workshop on Artificial Intelligence in Affective Computing. https://sites.google.com/usu.edu/affcomp2020.
300. G. W. Evans and J. M. McCoy. 1998. When buildings don’t work: The role of architecture in human health. J. Environ. Psychol. 18, 1, 85–94.
301. F. Hegel, C. Muhl, B. Wrede, M. Hielscher-Fastabend, and G. Sagerer. 2009. Understanding social robots. In 2009 Second International Conferences on Advances in Computer–Human Interactions. IEEE, 169–174.
302. A. Heimerl, T. Baur, F. Lingenfelser, J. Wagner, and E. André. 2019. NOVA—A tool for eXplainable cooperative machine learning. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 109–115.
303. J. Henrich, S. J. Heine, and A. Norenzayan. 2010. The weirdest people in the world? Behav. Brain Sci. 33, 2–3, 61–83.
304. J. C. Henry. 2006. Electroencephalography: Basic principles, clinical applications, and related fields, fifth edition. Neurology 67, 11, 2092. ISSN: 0028-3878.
305. J. Hernandez, D. McDuff, and R. W. Picard. December 2015. BioWatch: Estimation of heart and breathing rates from wrist motions. In Proceedings of the 2015 9th International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth 2015. IEEE, 169–176. ISBN: 9781631900457.
306. B. Herrmann, C. Thöni, and S. Gächter. 2008. Antisocial punishment across societies. Science 319, 5868, 1362–1367.
307. S. Herse, J. Vitale, M. Tonkin, D. Ebrahimian, S. Ojha, B. Johnston, W. Judge, and M.-A. Williams. 2018. Do you trust me, blindly? Factors influencing trust towards a robot recommender system. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 7–14.
308. M. J. Hertenstein, R. Holmes, M. McCullough, and D. Keltner. 2009. The communication of emotion via touch. Emotion 9, 4, 566–573.
309. C. Heyes. 2018. Empathy is not in our genes. Neurosci. Biobehav. Rev. 95, 499–507.
310. G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science 313, 504–507. ISSN: 0036-8075.
311. S. Hoermann, K. L. McCabe, D. N. Milne, and R. A. Calvo. 2017. Application of synchronous text-based dialogue systems in mental health interventions: Systematic review. J. Med. Internet Res. 19, 8.
312. G. Hoffman. 2019. Anki, Jibo, and Kuri: What we can learn from social robots that didn’t make it. IEEE Spectrum. https://spectrum.ieee.org/anki-jibo-and-kuri-what-we-can-learn-from-social-robotics-failures.
313. G. Hoffman and X. Zhao. 2020. A primer for conducting experiments in human–robot interaction. ACM Trans. Hum. Robot Interact. 10, 1, 1–31.
314. R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
315. S. S. Honig and T. Oron-Gilad. 2018. Understanding and resolving failures in human–robot interaction: Literature review and model development. Front. Psychol. 9, 861.
316. M. E. Hoque, D. J. McDuff, and R. W. Picard. 2012. Exploring temporal patterns in classifying frustrated and delighted smiles. IEEE Trans. Affect. Comput. 3, 323–334. ISSN: 1949-3045.
317. H. Hotelling. 1933. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24, 417–441. ISSN: 0022-0663.
318. K. Hovsepian, M. al’Absi, E. Ertin, T. Kamarck, M. Nakajima, and S. Kumar. 2015. cStress: Towards a gold standard for continuous stress assessment in the mobile environment. In UbiComp 2015—Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ISBN: 9781450335744.
319. A. Howard, C. Zhang, and E. Horvitz. 2017. Addressing bias in machine learning algorithms: A pilot study on emotion recognition for intelligent systems. In 2017 IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO). IEEE, 1–7.
320. Y. Huang and S. M. Khan. 2017. DyadGAN: Generating facial expressions in dyadic interactions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2259–2266.
321. Z. Huang, M. Dong, Q. Mao, and Y. Zhan. 2014. Speech emotion recognition using CNN. In Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 801–804.
322. J. Huberty, J. Green, C. Glissmann, L. Larkey, M. Puzia, and C. Lee. 2019. Efficacy of the mindfulness meditation mobile app “Calm” to reduce stress among college students: Randomized controlled trial. JMIR Mhealth Uhealth 7, 6, e14273.
323. E. Hudlicka. 2008. Affective computing for game design. In Proceedings of the 4th International North American Conference on Intelligent Games and Simulation. McGill University, Montreal, 5–12.
324. C. L. Hull. 1943. Principles of Behavior. D. Appleton-Century, New York.
325. C. L. Hull. 1951. Essentials of Behavior. Yale University Press, New Haven, CT.
326. C. L. Hull. 1952. A Behavior System: An Introduction to Behavior Theory Concerning the Individual Organism. Yale University Press, New Haven, CT.
327. S. Humphrey, A. Faghri, and M. Li. 2013. Health and transportation: The dangers and prevalence of road rage within the transportation system. Am. J. Civ. Eng. Arch. 1, 6, 156–163. ISSN: 2328-3998. http://pubs.sciepub.com/ajcea/1/6/5.
328. C. Hutto and E. Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. Proc. Int. AAAI Conf. Web Soc. Media 8, 1, 216–225.
329. O. Ignatyeva, D. Sokolov, O. Lukashenko, A. Shalakitskaia, S. Denef, and T. Samsonowa. 2019. Business models for emerging technologies: The case of affective computing. In 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 350–355.
330. T. Iio, M. Shiomi, K. Shinozawa, K. Shimohara, M. Miki, and N. Hagita. 2015. Lexical entrainment in human–robot interaction. Int. J. Soc. Robot. 7, 2, 253–263.
331. R. T. Ionescu, M. Popescu, and C. Grozea. 2013. Local learning to improve bag of visual words model for facial expression recognition. In Workshop on Challenges in Representation Learning, ICML. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.662.4620&rep=rep1&type=pdf.
332. B. Irfan, A. Ramachandran, S. Spaulding, D. F. Glas, I. Leite, and K. L. Koay. 2019. Personalization in long-term human–robot interaction. In 2019 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI). IEEE, 685–686.
333. B. Irfan, A. Ramachandran, S. Spaulding, S. Kalkan, G. I. Parisi, and H. Gunes. 2021. Lifelong learning and personalization in long-term human–robot interaction (LEAP-HRI). In Companion of the 2021 ACM/IEEE International Conference on Human–Robot Interaction. 724–727.
334. L. I. Ismail, T. Verhoeven, J. Dambre, and F. Wyffels. 2019. Leveraging robotics research for children with autism: A review. Int. J. Soc. Robot. 11, 3, 389–410.
335. G. Iyengar, H. J. Nock, and C. Neti. 2003. Audio-visual synchrony for detection of monologues in video archives. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’03), Vol. 5. IEEE, V-772.
336. A. B. Jacobs. September 1993. Great streets. ACCESS Mag. 1, 3, 23–27. https://escholarship.org/uc/item/3t62h1fv.
337. A. Jaimes and N. Sebe. 2007. Multimodal human–computer interaction: A survey. Comput. Vis. Image Underst. 108, 1–2, 116–134.
338. L. G. Jaimes, M. Llofriu, and A. Raij. 2015. PREVENTER, a selection mechanism for just-in-time preventive interventions. IEEE Trans. Affect. Comput. 7, 3, 243–257.
339. A. Jain. 1997. Feature selection: Evaluation, application, and small sample performance. IEEE Trans. Pattern Anal. Mach. Intell. 19, 2, 153–158. ISSN: 0162-8828.
340. R. Jain and S. Bagdare. 2011. Music and consumption experience: A review. Int. J. Retail Distrib. Manag. 39, 4, 289–302.
341. W. James. 1992. Writings, 1878–1899. Library of America. ISBN: 9780940450721. https://books.google.com/books?id=rr6kPc52tI4C.
342. N. Jaques, S. Taylor, A. Azaria, A. Ghandeharioun, A. Sano, and R. Picard. 2015a. Predicting students’ happiness from physiology, phone, mobility, and behavioral data. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). 222–228.
343. N. Jaques, S. Taylor, A. Sano, and R. Picard. 2015b. Multi-task, multi-kernel learning for estimating individual wellbeing. In Multimodal Machine Learning Workshop in Conjunction with NIPS.
344. N. Jaques, S. Taylor, A. Sano, and R. Picard. October 2017. Multimodal autoencoder: A deep learning approach to filling in missing sensor data and enabling better mood prediction. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 202–208. ISBN: 978-1-5386-0563-9. http://ieeexplore.ieee.org/document/8273601/.
345. N. Jaques, A. Lazaridou, E. Hughes, C. Gulcehre, P. A. Ortega, D. J. Strouse, J. Z. Leibo, and N. de Freitas. 2019. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning.
346. S. Järvelä, D. Gašević, T. Seppänen, M. Pechenizkiy, and P. A. Kirschner. 2020. Bridging learning sciences, machine learning and affective computing for understanding cognition and affect in collaborative learning. Br. J. Educ. Technol. 51, 6, 2391–2406.
347. S. Jeong and C. L. Breazeal. 2016. Improving smartphone users’ affect and wellbeing with personalized positive psychology interventions. In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016).
348. S. Jerritta, M. Murugappan, R. Nagarajan, and K. Wan. March 2011. Physiological signals based human emotion recognition: A review. In 2011 IEEE 7th International Colloquium on Signal Processing and its Applications. IEEE, 410–415. ISBN: 978-1-61284-414-5. http://ieeexplore.ieee.org/document/5759912/.
349. S. Ji, Z. Wang, Q. Liu, and X. Liu. 2016. Classification algorithms for privacy preserving in data mining: A survey. In Advances in Computer Science and Ubiquitous Computing. Springer, 312–322.
350. X. Jia, K. Li, X. Li, and A. Zhang. 2014. A novel semi-supervised deep learning framework for affective state recognition on EEG signals. In 2014 IEEE International Conference on Bioinformatics and Bioengineering. IEEE, 30–37.
351. B. Jiang and C. Claramunt. 2002. Integration of space syntax into GIS: New perspectives for urban morphology. Trans. GIS 6, 3, 295–309. ISSN: 1467-9671. https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-9671.00112.
352. S. Jirayucharoensak, S. Pan-Ngum, and P. Israsena. 2014. EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. Sci. World J. 2014.
353. S. Joglekar, D. Quercia, M. Redi, L. M. Aiello, T. Kauer, and N. Sastry. 2020. FaceLift: A transparent deep learning framework to beautify urban scenes. R. Soc. Open Sci. 7, 1, 190987. https://royalsocietypublishing.org/doi/full/10.1098/rsos.190987.
354. D. G. Johnson and J. M. Mulvey. 1995. Accountability and computer decision systems. Commun. ACM 38, 12, 58–64.
355. G. R. Jones and J. M. George. 1998. The experience and evolution of trust: Implications for cooperation and teamwork. Acad. Manag. Rev. 23, 3, 531–546.
356. J. P. Jones and L. A. Palmer. 1987. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. J. Neurophysiol. ISSN: 0022-3077.
357. A. J. Jones and J. Pitt. 2011. On the classification of emotions and its relevance to the understanding of trust. In Proceedings of the Workshop on Trust in Agent Societies at the 10th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2011). 69–82.
358. M. F. Jung. 2017. Affective grounding in human–robot interaction. In Proceedings of the 2017 ACM/IEEE International Conference on Human–Robot Interaction. ACM, 263–273.
359. M. Jung and P. Hinds. May 2018. Robots in the wild: A time for more robust theories of human–robot interaction. ACM Trans. Hum. Robot Interact. 7, 1.
360. S. E. Kahou, X. Bouthillier, P. Lamblin, C. Gulcehre, V. Michalski, K. Konda, S. Jean, P. Froumenty, Y. Dauphin, N. Boulanger-Lewandowski, R. Chandias Ferrari, M. Mirza, D. Warde-Farley, A. Courville, P. Vincent, R. Memisevic, C. Pal, and Y. Bengio. June 2016. EmoNets: Multimodal deep learning approaches for emotion recognition in video. J. Multimodal User Interfaces 10, 2, 99–111. ISSN: 1783-8738.
361. S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, Ç. Gülçehre, R. Memisevic, P. Vincent, A. Courville, Y. Bengio, R. C. Ferrari, M. Mirza, S. Jean, P. L. Carrier, Y. Dauphin, N. Boulanger-Lewandowski, A. Aggarwal, J. Zumer, P. Lamblin, J. P. Raymond, G. Desjardins, R. Pascanu, D. Warde-Farley, A. Torabi, A. Sharma, E. Bengio, K. R. Konda, and Z. Wu. 2013. Combining modality specific deep neural networks for emotion recognition in video. In ICMI 2013—Proceedings of the 2013 ACM International Conference on Multimodal Interaction. ISBN: 9781450321297.
362. T. Kalayci and O. Ozdamar. 1995. Wavelet preprocessing for automated neural network detection of EEG spikes. IEEE Eng. Med. Biol. Mag. 14, 2, 160–166.
363. E. Kalbe, M. Schlegel, A. T. Sack, D. A. Nowak, M. Dafotakis, C. Bangard, M. Brand, S. Shamay-Tsoory, O. A. Onur, and J. Kessler. 2010. Dissociating cognitive from affective theory of mind: A TMS study. Cortex 46, 6, 769–780.
364. N. Kalchbrenner, E. Grefenstette, and P. Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188.
365. R. E. Kaliouby and P. Robinson. 2004. Real-time inference of complex mental states from facial expressions and head gestures. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
366. T. Kanade. 1977. Computer Recognition of Human Faces, Vol. 47. Birkhäuser Verlag, Basel and Stuttgart.
367. T. Kanade, J. F. Cohn, and Y. Tian. 2000. Comprehensive database for facial expression analysis. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition. 46–53.
368. A. Kapur, A. Kapur, N. Virji-Babul, G. Tzanetakis, and P. F. Driessen. 2005. Gesture-based affective computing on motion capture data. In International Conference on Affective Computing and Intelligent Interaction. Springer, 1–7.
369. Y. Kato, T. Kanda, and H. Ishiguro. 2015. May I help you?—Design of human-like polite approaching behavior. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human–Robot Interaction. ACM, 35–42.
370. J. Kätsyri, M. Mäkäräinen, and T. Takala. 2017. Testing the ‘uncanny valley’ hypothesis in semirealistic computer-animated film characters: An empirical evaluation of natural film stimuli. Int. J. Hum. Comput. Stud. 97, 149–161.
371. T. Kauer, S. Joglekar, M. Redi, L. M. Aiello, and D. Quercia. September 2018. Mapping and visualizing deep-learning urban beautification. IEEE Comput. Graph. Appl. 38, 5, 70–83. ISSN: 1558-1756.
372. P. Kawde and G. K. Verma. 2017. Deep belief network based affect recognition from physiological signals. In 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON). IEEE, 587–592.
373. A. Kazemzadeh, S. Lee, P. G. Georgiou, and S. S. Narayanan. 2011. Emotion twenty questions: Toward a crowd-sourced theory of emotions. In Affective Computing and Intelligent Interaction. Springer, 1–10.
374. L. C. Kegel, P. Brugger, S. Frühholz, T. Grunwald, P. Hilfiker, O. Kohnen, M. L. Loertscher, D. Mersch, A. Rey, T. Sollfrank, B. K. Steiger, J. Sternagel, M. Weber, and H. Jokeit. 2020. Dynamic human and avatar facial expressions elicit differential brain responses. Soc. Cogn. Affect. Neurosci. 15, 3, 303–317.
375. D. Keltner, J. L. Tracy, D. Sauter, and A. Cowen. 2019. What basic emotion theory really says for the twenty-first century study of emotion. J. Nonverbal Behav. 43, 2, 195–201.
376. S. Kemp. 2020. Digital trends 2020: Every single stat you need to know about the internet. Retrieved September 8, 2020, from https://thenextweb.com/growth-quarters/2020/01/30/digital-trends-2020-every-single-stat-you-need-to-know-about-the-internet/.
377. M. Keramati and B. Gutkin. 2011. A reinforcement learning theory for homeostatic regulation. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS’11. Curran Associates Inc., Red Hook, NY, 82–90. ISBN: 9781618395993.
378. M. Keramati and B. Gutkin. December 2014. Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife 3, e04811. ISSN: 2050-084X.
379. G. Keren, T. Kirschstein, E. Marchi, F. Ringeval, and B. Schuller. 2017. End-to-end learning for dimensional emotion recognition from physiological signals. In 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 985–990.
380. S. J. Kessler, F. Jiang, and R. A. Hurley. 2020. The state of automated facial expression analysis (AFEA) in evaluating consumer packaged beverages. Beverages 6, 2, 27.
381. S. Khadka, S. Majumdar, T. Nassar, Z. Dwiel, E. Tumer, S. Miret, Y. Liu, and K. Tumer. 2019. Collaborative evolutionary reinforcement learning. In International Conference on Machine Learning. PMLR, 3341–3350.
382. A. R. Kherlopian, J. P. Gerrein, M. Yue, K. E. Kim, J. W. Kim, M. Sukumaran, and P. Sajda. 2006. Electrooculogram based system for computer control using a multiple feature classification model. In 2006 International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 1295–1298.
383. J. Kim and W. Winkler. 2003. Multiplicative noise for masking continuous data. Statistics 1, 9.
384. K. H. Kim, S. W. Bang, and S. R. Kim. May 2004. Emotion recognition system using short-term monitoring of physiological signals. Med. Biol. Eng. Comput. 42, 3, 419–427. ISSN: 0140-0118. http://link.springer.com/10.1007/BF02344719.
385. E. O. Kimbrough and A. Vostroknutov. 2016. Norms make preferences social. J. Eur. Econ. Assoc. 14, 3, 608–638.
386. R. Kirby, J. Forlizzi, and R. Simmons. 2010. Affective social robots. Robot. Auton. Syst. 58, 3, 322–332.
387. D. Kirsch. 1997. The Sentic Mouse: Developing a Tool for Measuring Emotional Valence. Technical Report, MIT Media Laboratory Perceptual Computing Section.
388. P. V. Klasnja, E. Hekler, S. Shiffman, A. Boruvka, D. Almirall, A. Tewari, and S. A. Murphy. 2015. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychol. 34S, 1220–1228.
389. J. Kleinberg, S. Mullainathan, and M. Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
390. A. Kleinsmith and N. Bianchi-Berthouze. 2013. Affective body expression perception and recognition: A survey. IEEE Trans. Affect. Comput. 4, 15–33. ISSN: 1949-3045.
391. R. Kohavi and G. H. John. 1997. Wrappers for feature subset selection. Artif. Intell. 97, 1–2, 273–324.
392. D. Kollias and S. Zafeiriou. 2019. Expression, affect, action unit recognition: Aff-Wild2, multi-task learning and ArcFace. arXiv preprint arXiv:1910.04855.
393. D. Kollias, M. A. Nicolaou, I. Kotsia, G. Zhao, and S. Zafeiriou. 2017. Recognition of affect in the wild using deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 1972–1979.
394. D. Kollias, P. Tzirakis, M. A. Nicolaou, A. Papaioannou, G. Zhao, B. Schuller, I. Kotsia, and S. Zafeiriou. 2019. Deep affect prediction in-the-wild: Aff-Wild database and challenge, deep architectures, and beyond. Int. J. Comput. Vis. 127, 6–7, 907–929.
395. G. Konidaris and A. Barto. 2006. An adaptive robot motivational system. In Proceedings of the 9th International Conference on From Animals to Animats: Simulation of Adaptive Behavior, SAB’06. Springer-Verlag, Berlin, 346–356. ISBN: 3540386084.
396. G. Konidaris and A. Barto. 2009. Skill discovery in continuous reinforcement learning domains using skill chaining. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (Eds.), Advances in Neural Information Processing Systems, Vol. 22. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2009/file/e0cf1f47118daebc5b16269099ad7347-Paper.pdf.
397. O. Korn, N. Akalin, and R. Gouveia. 2021. Understanding cultural preferences for social robots: A study in German and Arab communities. ACM Trans. Hum. Robot Interact. 10, 2, 1–19.
398. J. B. Kostis, A. Moreyra, M. Amendo, J. Di Pietro, N. Cosgrove, and P. Kuo. 1982. The effect of age on heart rate in subjects free of heart disease: Studies by ambulatory electrocardiography and maximal exercise stress test. Circulation 65, 1, 141–145.
399. T. Kostoulas, M. Muszynski, T. Chaspari, and P. Amelidis. 2020. Multimodal affect and aesthetic experience. In Proceedings of the 2020 International Conference on Multimodal Interaction. 888–889.
400. A. D. Kramer, J. E. Guillory, and J. T. Hancock. 2014. Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. 111, 24, 8788–8790.
401. J. Kranjec, S. Beguš, G. Geršak, and J. Drnovšek. 2014. Non-contact heart rate and heart rate variability measurements: A review. Biomed. Signal Process. Control 13, 102–112.
402. B. Kratzwald and S. Feuerriegel. 2019. Putting question–answering systems into practice: Transfer learning for efficient domain customization. ACM Trans. Manag. Inf. Syst. 9, 4, 1–20.
403. B. Kratzwald, S. Ilić, M. Kraus, S. Feuerriegel, and H. Prendinger. 2018. Deep learning for affective computing: Text-based emotion recognition in decision support. Decis. Support Syst. 115, 24–35.
404. S. D. Kreibig. 2010. Autonomic nervous system activity in emotion: A review. Biol. Psychol. 84, 3, 394–421.
405. A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105. https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.
406. K. Kroenke and R. L. Spitzer. 2002. The PHQ-9: A new depression diagnostic and severity measure. Psychiatr. Ann. 32, 9, 509–515.
407. J. A. Kroll. 2020. Accountability in computer systems. In The Oxford Handbook of Ethics of AI. Oxford University Press, 181.
408. S. Kujala. 2003. User involvement: A review of the benefits and challenges. Behav. Inf. Technol. 22, 1, 1–16.
409. Y. Kwon, J.-H. Won, B. J. Kim, and M. C. Paik. 2020. Uncertainty quantification using Bayesian neural networks in classification: Application to biomedical image segmentation. Comput. Stat. Data Anal. 142, 106816.
410. C. Lacey and C. Caudwell. 2019. Cuteness as a ‘dark pattern’ in home robots. In 2019 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI). IEEE, 374–381.
411. C. Lai, B. Alex, J. D. Moore, L. Tian, T. Hori, and G. Francesca. 2019. Detecting topic-oriented speaker stance in conversational speech. In Proceedings of INTERSPEECH 2019. 46–50.
412. P. J. Lang, M. M. Bradley, and B. N. Cuthbert. 1997. International Affective Picture System (IAPS): Technical manual and affective ratings. NIMH Center for the Study of Emotion and Attention, Gainesville, 39–58.
413. A. Lanitis, C. J. Taylor, and T. F. Cootes. 1995. Automatic face identification system using flexible appearance models. Image Vis. Comput. 13, 5, 393–401.
414. F. Larradet, R. Niewiadomski, G. Barresi, and L. S. Mattos. 2019. Appraisal theory-based mobile app for physiological data collection and labelling in the wild. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers. 752–756.
415. F. Larradet, R. Niewiadomski, G. Barresi, D. G. Caldwell, and L. S. Mattos. July 2020. Toward emotion recognition from physiological signals in the wild: Approaching the methodological issues in real-life data collection. Front. Psychol. 11, 1111. ISSN: 1664-1078.
416. J. Larus. 2020. Joint statement on contact tracing: Date 19th April 2020. https://drive.google.com/file/d/1OQg2dxPu-x-RZzETlpV3lFa259Nrpk1J/view.
417. P. A. Lasota, G. F. Rossano, and J. A. Shah. 2014. Toward safe close-proximity human–robot interaction with standard industrial robots. In 2014 IEEE International Conference on Automation Science and Engineering (CASE). IEEE, 339–344.
418. T. Lattimore and C. Szepesvári. 2020. Bandit Algorithms. Cambridge University Press. ISBN: 9781108486828. https://books.google.com/books?id=bbjpDwAAQBAJ.
419. B. P. L. Lau, S. H. Marakkalage, Y. Zhou, N. U. Hassan, C. Yuen, M. Zhang, and U.-X. Tan. December 2019. A survey of data fusion in smart city applications. Inf. Fusion 52, 357–374. ISSN: 1566-2535. http://www.sciencedirect.com/science/article/pii/S1566253519300326.
420. G. Laurans, P. M. A. Desmet, and P. Hekkert. September 2009. The emotion slider: A self-report device for the continuous measurement of emotion. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 1–6. ISBN: 978-1-4244-4800-5. http://ieeexplore.ieee.org/document/5349539/.
421. R. S. Lazarus. 1991. Emotion and Adaptation. Oxford University Press.
Ledalab. http://www.ledalab.de/.
422. A. Ledgerwood, C. K. Soderberg, and J. Sparks. 2017. Designing a study to maximize informational value. In M. C. Makel and J. A. Plucker (Eds.), Toward a More Perfect Psychology: Improving Trust, Accuracy, and Transparency in Research. American Psychological Association, 33–58.
423. J. Lee and N. Moray. 1992. Trust, control strategies and allocation of function in human–machine systems. Ergonomics 35, 10, 1243–1270.
424. C. C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan. 2011. Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53, 9–10, 1162–1171. ISSN: 0167-6393.
425. J. S. Lerner and L. Z. Tiedens. 2006. Portrait of the angry decision maker: How appraisal tendencies shape anger’s influence on cognition. J. Behav. Decis. Mak. 19, 2, 115–137.
426. J. S. Lerner, S. Han, and D. Keltner. 2007. Feelings and consumer decision making: Extending the Appraisal-Tendency Framework. J. Consum. Psychol. 17, 3, 181–187.
427. R. Levitan, A. Gravano, L. Willson, S. Benus, J. Hirschberg, and A. Nenkova. 2012. Acoustic-prosodic entrainment and social behavior. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 11–19.
428. M. Lewis. 2008. Self-conscious emotions: Embarrassment, pride, shame, and guilt. In M. Lewis, J. M. Haviland-Jones, and L. Feldman Barrett (Eds.), Handbook of Emotions, 3rd ed. The Guilford Press, 742–756.
429. B. Li and A. Sano. 2020a. Extraction and interpretation of deep autoencoder-based temporal features from wearables for forecasting personalized mood, health, and stress. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4, 2, 1–26. ISSN: 2474-9567.
430. B. Li and A. Sano. July 2020b. Early versus late modality fusion of deep wearable sensor features for personalized prediction of tomorrow’s mood, health, and stress. In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 5896–5899. ISBN: 978-1-7281-1990-8. https://ieeexplore.ieee.org/document/9175463/.
431. Y. Li and N. Vasconcelos. 2019. REPAIR: Removing representation bias by dataset resampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9572–9581.
432. N. Li, T. Li, and S. Venkatasubramanian. 2007. t-Closeness: Privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd International Conference on Data Engineering. IEEE, 106–115.
433. C. Li, C. Xu, and Z. Feng. 2016. Analysis of physiological for emotion recognition with the IRS model. Neurocomputing 178, 103–111.
434. T. Li, Y. Baveye, C. Chamaret, E. Dellandréa, and L. Chen. 2015. Continuous arousal self-assessments validation using real-time physiological responses. In Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia. ACM, 39–44.
435. P. Liao, K. Greenewald, P. V. Klasnja, and S. Murphy. 2019. Personalized HeartSteps: A reinforcement learning algorithm for optimizing physical activity. arXiv preprint arXiv:1909.03539.
  436. R. LiKamWa, Y. Liu, N. D. Lane, and L. Zhong. 2013. MoodScope: Building a mood sensor from smartphone usage patterns. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys’13. Association for Computing Machinery, New York, NY, 389–402. ISBN: 9781450316729. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  437. S. Lin, C. Hsu, W. Talamonti, Y. Zhang, S. Oney, J. Mars, and L. Tang. 2018. Adasa: A conversational in-vehicle digital assistant for advanced driver assistance features. In P. Baudisch, A. Schmidt, and A. Wilson (Eds.), The 31st Annual ACM Symposium on User Interface Software and Technology, UIST 2018. Berlin, Germany, October 14–17, 2018, 531–542. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  438. Y. Lindell. 2005. Secure multiparty computation for privacy preserving data mining. In Encyclopedia of Data Warehousing and Mining. IGI Global, 1005–1009. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  439. T. Lindenthal. 2017. Beauty in the eye of the home-owner: Aesthetic zoning and residential property values. Real Estate Econ. 48, 530–555. https://onlinelibrary.wiley.com/doi/full/10.1111/1540-6229.12204. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  440. K. A. Lindquist. 2013. Emotions emerge from more basic psychological ingredients: A modern psychological constructionist model. Emot. Rev. 5, 4, 356–368. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  441. K. A. Lindquist, J. K. MacCormack, and H. Shablack. 2015. The role of language in emotion: Predictions from psychological constructionism. Front. Psychol. 6, 444. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  442. Z. C. Lipton. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3, 31–57. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  443. G. Littlewort, M. S. Bartlett, I. Fasel, J. Susskind, and J. Movellan. 2004. Dynamics of facial expression extracted automatically from video. In 2004 Conference on Computer Vision and Pattern Recognition Workshop. IEEE, 80–80. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  444. L. Liu, E. A. Silva, C. Wu, and H. Wang. September. 2017. A machine learning-based method for the large-scale evaluation of the qualities of the urban environment. Comput. Environ. Urban Syst. 65, 113–125. ISSN: 0198-9715. http://www.sciencedirect.com/science/article/pii/S0198971516301831. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  445. M. Liu, R. Wang, S. Li, S. Shan, Z. Huang, and X. Chen. November. 2014a. Combining multiple kernel methods on Riemannian manifold for emotion recognition in the wild. In Proceedings of the 16th International Conference on Multimodal Interaction. ACM, New York, NY, 494–501. ISBN: 9781450328852. https://dl.acm.org/doi/10.1145/2663204.2666274. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  446. S. Liu, D.-Y. Huang, W. Lin, M. Dong, H. Li, and E. P. Ong. 2014b. Emotional facial expression transfer based on temporal restricted Boltzmann machines. In Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific. IEEE, 1–7. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  447. S. Liu, D. Zhang, M. Xu, H. Qi, F. He, X. Zhao, P. Zhou, L. Zhang, and D. Ming. 2015. Randomly dividing homologous samples leads to overinflated accuracies for emotion recognition. Int. J. Psychophysiol. 96, 1, 29–37. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  448. W. Liu, W.-L. Zheng, and B.-L. Lu. 2016. Multimodal emotion recognition using multimodal deep learning. arXiv preprint arXiv:1602.08225. https://arxiv.org/abs/1602.08225.Google ScholarGoogle Scholar
  449. T. Liu, P. P. Liang, M. Muszynski, R. Ishii, D. Brent, R. Auerbach, N. Allen, and L.-P. Morency. 2020. Multimodal privacy-preserving mood prediction from mobile data: A preliminary study. arXiv preprint arXiv:2012.02359.Google ScholarGoogle Scholar
  450. S. R. Livingstone and F. A. Russo. 2018. The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 13, 5, e0196391. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
451. B. Logan and J. Healey. 2006. Sensors to detect the activities of daily living. In 2006 International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 5362–5365.
452. A. Lotz, K. Ihme, A. Charnoz, P. Maroudis, I. Dmitriev, and A. Wendemuth. 2018. Recognizing behavioral factors while driving: A real-world multimodal corpus to monitor the driver’s affective state. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan.
453. C. Lutz and A. Tamò-Larrieux. 2020. The robot privacy paradox: Understanding how privacy concerns shape intentions to use social robots. Hum. Mach. Commun. J. 1, 1, 87–111.
454. M. L. Lyon. 1995. Missing emotion: The limitations of cultural constructionism in the study of emotion. Cult. Anthropol. 10, 2, 244–263.
455. M. J. Lyons, J. Budynek, and S. Akamatsu. 1999. Automatic classification of single facial images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 12, 1357–1362.
456. M. A. Madaio, R. Lasko, J. Cassell, and A. Ogan. 2017. Using temporal association rule mining to predict dyadic rapport in peer tutoring. In Proceedings of the 10th International Conference on Educational Data Mining.
457. K. Makantasis, A. Liapis, and G. N. Yannakakis. 2021. The pixels and sounds of emotion: General-purpose representations of arousal in games. IEEE Trans. Affect. Comput.
458. M. Malik, A. J. Camm, J. T. Bigger, G. Breithardt, S. Cerutti, R. J. Cohen, P. Coumel, E. L. Fallen, H. L. Kennedy, R. E. Kleiger, F. Lombardi, A. Malliani, A. J. Moss, J. N. Rottman, G. Schmidt, P. J. Schwartz, and D. H. Singer. 1996. Heart rate variability: Standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. 17, 3, 354–381. ISSN 0195-668X.
459. M. Mansoorizadeh and N. M. Charkari. 2010. Multimodal information fusion application to human emotion recognition from face and speech. Multimed. Tools Appl. 49, 2, 277–297.
460. V. Marda and S. Ahmed. 2021. Emotional entanglement: China’s emotion recognition market and its implications for human rights. Retrieved March 7, 2021, from https://www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf.
461. J. Marín-Morales, J. L. Higuera-Trujillo, A. Greco, J. Guixeres, C. Llinares, E. P. Scilingo, M. Alcañiz, and G. Valenza. 2018. Affective computing in virtual reality: Emotion recognition from brain and heartbeat dynamics using wearable sensors. Sci. Rep. 8, 1, 13657. ISSN 2045-2322. http://www.nature.com/articles/s41598-018-32063-4.
462. J. Marín-Morales, C. Llinares, J. Guixeres, and M. Alcañiz. 2020. Emotion recognition in immersive virtual reality: From statistics to affective computing. Sensors 20, 18, 1–26.
463. R. P. Marinier and J. E. Laird. 2008. Emotion-driven reinforcement learning. Proc. Annu. Meet. Cogn. Sci. Soc. 30, 115–120.
464. R. P. Marinier, J. E. Laird, and R. L. Lewis. 2009. A computational unification of cognitive behavior and emotion. Cogn. Syst. Res. 10, 1, 48–69. ISSN 1389-0417. https://www.sciencedirect.com/science/article/pii/S1389041708000302.
465. S. Marsella and J. Gratch. 2002. A step toward irrationality: Using emotion to change belief. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1. 334–341.
466. S. C. Marsella and J. Gratch. 2009. EMA: A process model of appraisal dynamics. Cogn. Syst. Res. 10, 1, 70–90.
467. S. Marsella, J. Gratch, and P. Petta. 2010. Computational models of emotion. In A Blueprint for Affective Computing—A Sourcebook and Manual. Oxford University Press, 21–46.
468. H. P. Martinez, Y. Bengio, and G. N. Yannakakis. 2013. Learning deep physiological models of affect. IEEE Comput. Intell. Mag. 8, 2, 20–33.
469. K. Mase. 1991. Recognition of facial expression from optical flow. IEICE Trans. Inf. Syst. 74, 10, 3474–3483. https://search.ieice.org/bin/summary.php?id=e74-d_10_3474.
470. I. B. Mauss and M. D. Robinson. 2009. Measures of emotion: A review. Cogn. Emot. 23, 2, 209–237.
471. S. E. Maxwell, M. Y. Lau, and G. S. Howard. 2015. Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70, 6, 487–498.
472. R. R. McCrae and O. P. John. 1992. An introduction to the five-factor model and its applications. J. Pers. 60, 2, 175–215.
473. J. H. McDonald. 2009. Handbook of Biological Statistics. Vol. 2. Sparky House Publishing, Baltimore, MD.
474. D. McDuff, K. Rowan, P. Choudhury, J. Wolk, T. Pham, and M. Czerwinski. 2019. A multimodal emotion sensing platform for building emotion-aware applications. arXiv preprint arXiv:1903.12133.
475. D. M. McNair, M. Lorr, and L. F. Droppleman. 1971. EdITS Manual for the Profile of Mood States (POMS). Educational and Industrial Testing Service.
476. N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.
477. G. Meinlschmidt, J.-H. Lee, E. Stalujanis, A. Belardi, M. Oh, E. K. Jung, H.-C. Kim, J. Alfano, S.-S. Yoo, and M. Tegethoff. 2016. Smartphone-based psychotherapeutic micro-interventions to improve mood in a real-world setting. Front. Psychol. 7, 1112. ISSN 1664-1078. https://www.frontiersin.org/article/10.3389/fpsyg.2016.01112.
478. E. L. Melanson and P. S. Freedson. 2001. The effect of endurance training on resting heart rate variability in sedentary adult males. Eur. J. Appl. Physiol. 85, 5, 442–449.
479. R. Mendes and J. P. Vilela. 2017. Privacy-preserving data mining: Methods, metrics, and applications. IEEE Access 5, 10562–10582.
480. J. Meyerowitz and R. Roy Choudhury. 2009. Hiding stars with fireworks: Location privacy through camouflage. In Proceedings of the 15th Annual International Conference on Mobile Computing and Networking. 345–356.
481. K. Mikolajczyk and C. Schmid. 2005. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 27, 10, 1615–1630. ISSN 0162-8828.
482. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
483. T. Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267, 1–38.
484. G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller. 1990. Introduction to WordNet: An on-line lexical database. Int. J. Lexicogr. 3, 4, 235–244.
485. W. Min, S. Mei, L. Liu, Y. Wang, and S. Jiang. 2019. Multi-task deep relative attribute learning for visual urban perception. IEEE Trans. Image Process. 29, 657–669.
486. A. Miner, A. Chow, S. Adler, I. Zaitsev, P. Tero, A. Darcy, and A. Paepcke. 2016. Conversational agents and mental health: Theory-informed assessment of language and affect. In HAI 2016, Proceedings of the Fourth International Conference on Human Agent Interaction. 123–130.
487. M. L. Minsky. 1988. The Society of Mind. Simon & Schuster.
488. D. Mobbs, C. C. Hagan, T. Dalgleish, B. Silston, and C. Prévost. 2015. The ecology of human fear: Survival optimization and the nervous system. Front. Neurosci. 9, 55.
489. T. M. Moerland, J. Broekens, and C. M. Jonker. 2018. Emotion in reinforcement learning agents and robots: A survey. Mach. Learn. 107, 2, 443–480. ISSN 0885-6125.
490. S. M. Mohammad and S. Kiritchenko. 2013. Using hashtags to capture fine emotion categories from tweets. Comput. Intell. 31, 2.
491. D. C. Mohr, M. Zhang, and S. M. Schueller. 2017. Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annu. Rev. Clin. Psychol. 13, 23–27.
492. N. Moraveji, O. Ben, T. Nguyen, M. Saadat, Y. Khalighi, R. Pea, and J. Heer. 2011. Peripheral paced respiration: Influencing user physiology during information work. In UIST’11—Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. ISBN 9781450307161.
493. C. L. Morgan. 1903. Instinct and intelligence. In An Introduction to Comparative Psychology. Walter Scott Publishing, London, 197–216.
494. C. J. Morgan. 2017. Use of proper statistical techniques for research studies with small samples. Am. J. Physiol. Lung Cell. Mol. Physiol. 313, 5, L873–L877.
495. M. Morris and A. Aguilera. 2012. Mobile, social, and wearable computing and the evolution of psychological practice. Prof. Psychol. Res. Pract. 43, 6, 622–626.
496. M. E. Morris, Q. Kathawala, T. K. Leen, E. E. Gorenstein, F. Guilak, M. Labhard, and W. Deleeuw. 2010. Mobile therapy: Case study evaluations of a cell phone application for emotional self-awareness. J. Med. Internet Res. 12, e10.
497. S. T. Moturu, I. Khayal, N. Aharony, W. Pan, and A. Pentland. 2011. Using social sensing to understand the links between sleep, mood, and sociability. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing. 208–214.
498. L. Mou, Z. Meng, R. Yan, G. Li, Y. Xu, L. Zhang, and Z. Jin. 2016. How transferable are neural networks in NLP applications? arXiv preprint arXiv:1603.06111.
499. A. J. Moye and M. K. Van Vugt. 2019. A computational model of focused attention meditation and its transfer to a sustained attention task. IEEE Trans. Affect. Comput. 12, 329–339.
500. C. Mumenthaler, D. Sander, and A. S. Manstead. 2018. Emotion recognition in simulated social interactions. IEEE Trans. Affect. Comput. 11, 2, 308–312.
501. M. Muszynski, T. Kostoulas, P. Lombardo, T. Pun, and G. Chanel. 2018. Aesthetic highlight detection in movies based on synchronization of spectators’ reactions. ACM Trans. Multimedia Comput. Commun. Appl. 14, 3, 68.
502. M. Muszynski, L. Tian, C. Lai, J. Moore, T. Kostoulas, P. Lombardo, T. Pun, and G. Chanel. 2019. Recognizing induced emotions of movie audiences from multimodal information. IEEE Trans. Affect. Comput. 12, 1, 36–52.
503. N. Napi, A. Zaidan, B. Zaidan, O. Albahri, M. Alsalem, and A. Albahri. 2019. Medical emergency triage and patient prioritisation in a telemedicine environment: A systematic review. Health Technol. 9, 5, 679–700.
504. S. Narayanan and P. G. Georgiou. 2013. Behavioral signal processing: Deriving human behavioral informatics from speech and language. Proc. IEEE 101, 5, 1203–1233.
505. V. Narayanan, B. M. Manoghar, and A. Bera. 2020. EWareNet: Emotion aware human intent prediction and adaptive spatial profile fusion for social robot navigation. arXiv preprint arXiv:2011.09438.
506. F. Nasoz, K. Alvarez, C. L. Lisetti, and N. Finkelstein. 2004. Emotion recognition from physiological signals using wireless sensors for presence technologies. Cogn. Technol. Work 6, 1, 4–14.
507. A. V. Nefian, L. Liang, X. Pi, X. Liu, and K. Murphy. 2002. Dynamic Bayesian networks for audio-visual speech recognition. EURASIP J. Adv. Signal Process. 2002, 11, 783042.
508. A. Y. Ng and S. J. Russell. 2000. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML). 663–670.
509. K. Nickel, T. Gehrig, R. Stiefelhagen, and J. McDonough. 2005. A joint particle filter for audio-visual speaker tracking. In Proceedings of the 7th International Conference on Multimodal Interfaces. ACM, 61–68.
510. M. A. Nicolaou, H. Gunes, and M. Pantic. 2010. Audio-visual classification and fusion of spontaneous affective data in likelihood space. In Proceedings—International Conference on Pattern Recognition. 3695–3699. ISBN 9780769541099.
511. P. M. Niedenthal and F. Ric. 2017. Psychology of Emotion. Psychology Press.
512. P. M. Niedenthal, M. Rychlowska, A. Wood, and F. Zhao. 2018. Heterogeneity of long-history migration predicts smiling, laughter and positive emotion across the globe and within the United States. PLoS One 13, 8, e0197651.
513. R. Nielek, M. Ciastek, and W. Kopeć. 2017. Emotions make cities live: Towards mapping emotions of older adults on urban space. In Proceedings of the International Conference on Web Intelligence, WI’17. Association for Computing Machinery, Leipzig, Germany, 1076–1079. ISBN 978-1-4503-4951-2.
514. P. A. Nogueira, R. Rodrigues, E. Oliveira, and L. E. Nacke. 2015. Modelling human emotion in interactive environments: Physiological ensemble and grounded approaches for synthetic agents. In Web Intelligence, Vol. 13. IOS Press, 195–214.
515. D. A. Norman and S. W. Draper. 1986. User Centered System Design: New Perspectives on Human–Computer Interaction. CRC Press.
516. F. Noroozi, D. Kaminska, C. Corneanu, T. Sapinski, S. Escalera, and G. Anbarjafari. 2018. Survey on emotional body gesture recognition. IEEE Trans. Affect. Comput. 12, 505–523.
517. E. Nosakhare and R. Picard. 2020. Toward assessing and recommending combinations of behaviors for improving health and well-being. ACM Trans. Comput. Healthc. 1, 1, 1–29. ISSN 2691-1957.
518. M. Nussbaum and A. Sen. 1993. The Quality of Life. Clarendon Press.
519. K. Oatley and P. N. Johnson-Laird. 1987. Towards a cognitive theory of emotions. Cogn. Emot. 1, 1, 29–50.
520. M. Ochs, R. Niewiadomski, and C. Pelachaud. 2015. Facial expressions of emotions for virtual characters. In The Oxford Handbook of Affective Computing. Oxford University Press, 261–272.
521. A. Ogarkova. 2016. Translatability of emotions. In Emotion Measurement. Elsevier, 575–599.
522. S. R. Oliveira and O. R. Zaiane. 2002. Privacy preserving frequent itemset mining. In Proceedings of the IEEE International Conference on Privacy, Security and Data Mining. Vol. 14, 43–54.
523. S. R. Oliveira and O. R. Zaiane. 2010. Privacy preserving clustering by data transformation. J. Inf. Data Manag. 1, 1, 37–51. https://periodicos.ufmg.br/index.php/jidm/article/view/32.
524. S. Ollander, C. Godin, A. Campagne, and S. Charbonnier. 2016. A comparison of wearable and stationary sensors for stress detection. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 4362–4366.
525. A. Ortony and G. Clore. 2015. Can an appraisal model be compatible with psychological constructionism? In The Psychological Construction of Emotion. Guilford, New York, NY, 305–333.
526. A. Ortony and T. J. Turner. 1990. What’s basic about basic emotions? Psychol. Rev. 97, 3, 315–331.
527. A. Ortony, G. L. Clore, and A. Collins. 1990. The Cognitive Structure of Emotions. Cambridge University Press.
528. A. J. O’Toole, J. Harms, S. L. Snow, D. R. Hurst, M. R. Pappas, J. H. Ayyad, and H. Abdi. 2005. A video database of moving faces and people. IEEE Trans. Pattern Anal. Mach. Intell. 27, 5, 812–816.
529. S. Oviatt. 2013. The Design of Future Educational Interfaces. Routledge.
530. S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, and A. Krüger. 2018. The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. Morgan & Claypool.
531. A. Paiva. 2000. Affective interactions: Toward a new generation of computer interfaces? In A. Paiva (Ed.), International Workshop on Affective Interactions. Springer, Berlin, 1–8.
532. X. Pan and A. F. de C. Hamilton. 2018. Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. Br. J. Psychol. 109, 3, 395–417.
533. S. J. Pan and Q. Yang. 2009. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 10, 1345–1359.
534. B. Pang and L. Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. Retr. 2, 1–2, 1–135.
535. J. Panksepp and D. Watt. 2011. What is basic about basic emotions? Lasting lessons from affective neuroscience. Emot. Rev. 3, 4, 387–396.
536. M. Pantic and L. J. M. Rothkrantz. 2000. Automatic analysis of facial expressions: The state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1424–1445. ISSN 0162-8828.
537. M. Pantic, I. Patras, and L. Rothkrantz. 2002. Facial action recognition in face profile image sequences. In Proceedings—2002 IEEE International Conference on Multimedia and Expo, ICME 2002. ISBN 0780373049.
538. M. Pantic, N. Sebe, J. F. Cohn, and T. Huang. 2005a. Affective multimodal human–computer interaction. In Proceedings of the 13th Annual ACM International Conference on Multimedia. ACM, 669–676.
539. M. Pantic, M. Valstar, R. Rademaker, and L. Maat. 2005b. Web-based database for facial expression analysis. In Proceedings of the IEEE International Conference on Multimedia and Expo. IEEE.
540. P. Paredes, R. Gilad-Bachrach, M. Czerwinski, A. Roseway, K. Rowan, and J. Hernandez. 2014. PopTherapy: Coping with stress through pop-culture. In PervasiveHealth ’14: Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare.
541. G. I. Parisi, J. Tani, C. Weber, and S. Wermter. 2017. Lifelong learning of human actions with deep neural network self-organization. Neural Netw. 96, 137–149.
542. G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter. 2019. Continual lifelong learning with neural networks: A review. Neural Netw. 113, 54–71.
543. H. W. Park, I. Grover, S. Spaulding, L. Gomez, and C. Breazeal. 2019. Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles. In Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004. Proceedings.
544. J. R. Parks and J. L. Schofer. 2006. Characterizing neighborhood pedestrian environments with secondary data. Transp. Res. D Transp. Environ. 11, 4, 250–263. ISSN 1361-9209. http://www.sciencedirect.com/science/article/pii/S1361920906000277.
545. D. L. Paulhus and K. M. Williams. 2002. The dark triad of personality: Narcissism, Machiavellianism, and psychopathy. J. Res. Pers. 36, 556–563.
546. A. Pavlenko. 2014. The Bilingual Mind: And What It Tells Us about Language and Thought. Cambridge University Press.
547. I. P. Pavlov. 1927. Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex. Nature 121, 3052, 662–664.
548. H. Peng, F. Long, and C. Ding. 2005. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 27, 8, 1226–1238. ISSN 0162-8828.
549. J. W. Pennebaker, M. E. Francis, and R. J. Booth. 2001. Linguistic Inquiry and Word Count: LIWC 2001. Lawrence Erlbaum Associates, Mahwah, NJ.
550. A. Pentland, D. Lazer, D. Brewer, and T. Heibeck. 2009. Using reality mining to improve public health and medicine. Stud. Health Technol. Inform. 149, 93–102.
551. V. Perez Rosas, R. Mihalcea, and L. P. Morency. 2013. Multimodal sentiment analysis of Spanish online videos. IEEE Intell. Syst. 28, 38–45. ISSN 1541-1672.
552. L. Pessoa. 2018. Emotion and the interactive brain: Insights from comparative neuroanatomy and complex systems. Emot. Rev. 10, 3, 204–216.
553. S. Petridis and M. Pantic. 2008. Audiovisual discrimination between laughter and speech. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 5117–5120.
554. P. Petta, C. Pelachaud, and R. Cowie. 2011. Emotion-Oriented Systems: The Humaine Handbook. Springer.
555. A. J. Phillips, W. M. Clerx, C. S. O’Brien, A. Sano, L. K. Barger, R. W. Picard, S. W. Lockley, E. B. Klerman, and C. A. Czeisler. 2017. Irregular sleep/wake patterns are associated with poorer academic performance and delayed circadian and sleep/wake timing. Sci. Rep. ISSN 2045-2322.
556. S. Piana, A. Staglianò, A. Camurri, and F. Odone. 2013. A set of full-body movement features for emotion recognition to help children affected by autism spectrum condition. In IDGEI International Workshop. http://fdg2013.org/program/workshops/papers/IDGEI2013/idgei2013_4.pdf.
557. S. Piana, A. Staglianò, F. Odone, A. Verri, and A. Camurri. 2014. Real-time automatic emotion recognition from body gestures. arXiv preprint arXiv:1402.5047.
558. R. W. Picard. 1995. Affective Computing. MIT Media Laboratory Perceptual Computing Section Technical Report No. 321. Cambridge, MA.
559. R. W. Picard. 1997. Affective Computing. MIT Press.
560. R. W. Picard. 2000. Affective Computing. MIT Press.
561. R. W. Picard. 2011. Measuring affect in the wild. In International Conference on Affective Computing and Intelligent Interaction. Springer, 3.
562. R. W. Picard and J. Healey. 1997a. Affective wearables. In Proceedings of the 1st IEEE International Symposium on Wearable Computers, ISWC’97. IEEE Computer Society, 90–97. ISBN 0818681926.
563. R. W. Picard and J. Healey. 1997b. Affective wearables. Pers. Technol. 1, 4, 231–240.
564. R. W. Picard, E. Vyzas, and J. Healey. 2001. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 23, 10, 1175–1191.
565. A. Pika, M. T. Wynn, S. Budiono, A. H. M. ter Hofstede, W. M. P. van der Aalst, and H. A. Reijers. 2019. Towards privacy-preserving process mining in healthcare. In International Conference on Business Process Management. Springer, 483–495.
566. V. Pitsikalis, A. Katsamanis, G. Papandreou, and P. Maragos. 2006. Adaptive multimodal fusion by uncertainty compensation. In INTERSPEECH 2006 and 9th International Conference on Spoken Language Processing, INTERSPEECH 2006—ICSLP. ISBN 9781604234497.
567. R. Plutchik. 1984. Emotions: A general psychoevolutionary theory. In Approaches to Emotion, Chapter 8. Psychology Press, 197–219.
568. S. Poria, E. Cambria, and A. Gelbukh. 2015. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 2539–2544.
569. S. Poria, I. Chaturvedi, E. Cambria, and A. Hussain. 2016. Convolutional MKL based multimodal emotion recognition and sentiment analysis. In 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 439–448.
570. S. Poria, E. Cambria, R. Bajpai, and A. Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. Inf. Fusion 37, 98–125. ISSN 1566-2535. http://www.sciencedirect.com/science/article/pii/S1566253517300738.
571. S. Poria, N. Majumder, R. Mihalcea, and E. Hovy. 2019. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access 7, 100943–100953.
572. S. Poria, D. Hazarika, N. Majumder, and R. Mihalcea. 2020. Beneath the tip of the iceberg: Current challenges and new directions in sentiment analysis research. IEEE Trans. Affect. Comput.
573. J. Posner, J. A. Russell, and B. S. Peterson. 2005. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev. Psychopathol. 17, 3, 715–734.
574. D. Premack and G. Woodruff. 1978. Does the chimpanzee have a theory of mind? Behav. Brain Sci. 1, 4, 515–526.
575. S. D. Preston and F. B. M. de Waal. 2002. Empathy: Its ultimate and proximate bases. Behav. Brain Sci. 25, 1, 1–20.
576. J. Qin, X. Zhou, C. Sun, H. Leng, and Z. Lian. 2013. Influence of green spaces on environmental satisfaction and physiological status of urban residents. Urban For. Urban Green. 12, 4, 490–497.
577. D. Quercia, N. K. O’Hare, and H. Cramer. 2014. Aesthetic capital: What makes London look beautiful, quiet, and happy? In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing—CSCW’14. ACM Press, Baltimore, MD, 945–955. ISBN 978-1-4503-2540-0. http://dl.acm.org/citation.cfm?doid=2531602.2531613.
578. M. Rabbi, A. Pfammatter, M. Zhang, B. Spring, and T. Choudhury. 2015a. Automated personalized feedback for physical activity and dietary behavior change with mobile phones: A randomized controlled trial on adults. JMIR Mhealth Uhealth 3, 2, e42.
579. M. Rabbi, M. H. Aung, M. Zhang, and T. Choudhury. 2015b. MyBehavior: Automatic personalized health feedback from user behaviors and preferences using smartphones. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing. 707–718.
580. M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy. 2020. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 469–481.
581. N. Rajcic and J. McCormack. 2020. Mirror ritual: An affective interface for emotional self-reflection. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
582. D. Ramachandran and E. Amir. 2007. Bayesian inverse reinforcement learning. In IJCAI, Vol. 7, 2586–2591. https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf.
583. M. A. Rana, M. Mukadam, S. R. Ahmadzadeh, S. Chernova, and B. Boots. 2017. Towards robust skill generalization: Unifying learning from demonstration and motion planning. In S. Levine, V. Vanhoucke, and K. Goldberg (Eds.), Proceedings of the 1st Annual Conference on Robot Learning, Vol. 78 of Proceedings of Machine Learning Research. PMLR, 109–118. https://proceedings.mlr.press/v78/rana17a.html.
584. P. Rani, C. Liu, N. Sarkar, and E. Vanman. 2006. An empirical study of machine learning techniques for affect recognition in human–robot interaction. Pattern Anal. Appl. 9, 1, 58–69.
585. R. Rao and R. Derakhshani. 2005. A comparison of EEG preprocessing methods using time delay neural networks. In Conference Proceedings, 2nd International IEEE EMBS Conference on Neural Engineering. IEEE, 262–264.
586. M. Redi, L. M. Aiello, R. Schifanella, and D. Quercia. 2018. The spirit of the city: Using social media to capture neighborhood ambiance. Proc. ACM Hum. Comput. Interact. 2, 144, 1–18.
587. Research and Markets. 2021. Affective Computing Market—Growth, Trends, COVID-19 Impact, and Forecasts (2021–2026). Retrieved March 23, 2021, from https://www.researchandmarkets.com/reports/4602229/affective-computing-market-growth-trends#rela3-4396321.
588. I. M. Rezazadeh, M. Firoozabadi, H. Hu, and S. M. R. H. Golpayegani. 2012. Co-adaptive and affective human–machine interface for improving training performances of virtual myoelectric forearm prosthesis. IEEE Trans. Affect. Comput. 3, 3, 285–297.
589. L. Rhue. 2018. Racial influence on automated perceptions of emotions. Available at SSRN 3281765.
590. T. Ribeiro and A. Paiva. 2012. The illusion of robotic life: Principles and practices of animation for robots. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human–Robot Interaction. 383–390.
591. M. T. Ribeiro, S. Singh, and C. Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, August 13–17, 2016. 1135–1144.
592. S. Richardson. 2020. Affective computing in the modern workplace. Bus. Inf. Rev. 37, 2, 78–85.
593. L. D. Riek. 2012. Wizard of Oz studies in HRI: A systematic review and new reporting guidelines. J. Hum. Robot Interact. 1, 1, 119–136.
594. M. B. Ring. 1994. Continual Learning in Reinforcement Environments. Ph.D. thesis. University of Texas at Austin, Austin, TX.
595. L. Ring, T. Bickmore, and P. Pedrelli. 2016. An affectively aware virtual therapist for depression counseling. In Proceedings of the CHI 2016 Workshop on Computing and Mental Health. http://relationalagents.com/publications/CHI2016-MentalHealth.pdf.
596. F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne. 2013. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). 1–8.
597. F. Ringeval, B. Schuller, M. Valstar, N. Cummins, R. Cowie, L. Tavabi, M. Schmitt, S. Alisamir, S. Amiriparian, E.-M. Messner, S. Song, S. Liu, Z. Zhao, A. Mallol-Ragolta, Z. Ren, M. Soleymani, and M. Pantic. 2019. AVEC 2019 workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition. In Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop. 3–12.
598. G. Riva and M. Mauri. 2021. MuMMER: How robotics can reboot social interaction and customer engagement in shops and malls. Cyberpsychol. Behav. Soc. Netw. 24, 3, 210–211.
599. G. Riva, F. Mantovani, C. S. Capideville, A. Preziosa, F. Morganti, D. Villani, A. Gaggioli, C. Botella, and M. Alcañiz. 2007. Affective interactions using virtual reality: The link between presence and emotions. Cyberpsychol. Behav. 10, 1, 45–56. ISSN 1094-9313. http://www.liebertpub.com/doi/10.1089/cpb.2006.9993.
600. G. Rizos and B. W. Schuller. 2020. Average Jane, where art thou?—Recent avenues in efficient machine learning under subjectivity uncertainty. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems. Springer, 42–55.
601. P. Robinette, W. Li, R. Allen, A. M. Howard, and A. R. Wagner. 2016. Overtrust of robots in emergency evacuation scenarios. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction. IEEE Press, 101–108.
602. N. L. Robinson, T. V. Cottier, and D. J. Kavanagh. 2019. Psychosocial health interventions by social robots: Systematic review of randomized controlled trials. J. Med. Internet Res. 21, 5, e13203.
603. M. Robnik-Šikonja and M. Bohanec. 2018. Perturbation-based explanations of prediction models. In Human and Machine Learning. Human–Computer Interaction Series. Springer, Cham, 159–175.
604. C. Rogers. 1961. On Becoming a Person: A Therapist’s View of Psychotherapy. Houghton Mifflin, Sentry Edition. ISBN 9780395084090. https://books.google.com/books?id=oSqjUrKjrKYC.
605. J. Rong, Y. P. P. Chen, M. Chowdhury, and G. Li. 2007. Acoustic features extraction for emotion recognition. In Proceedings—6th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2007; 1st IEEE/ACIS International Workshop on e-Activity, IWEA 2007. 419–424. ISBN 0769528414.
606. A. M. Rosenthal-von der Pütten, N. C. Krämer, and J. Herrmann. 2018. The effects of human-like and robot-specific affective nonverbal behavior on perception, emotion, and behavior. Int. J. Soc. Robot. 10, 5, 569–582.
607. F. D. Rosis, C. Pelachaud, I. Poggi, V. Carofiglio, and B. D. Carolis. 2003. From Greta’s mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. Int. J. Hum.-Comput. Stud. 59, 1, 81–118.
608. S. Rossi, F. Ferland, and A. Tapus. 2017. User profiling and behavioral adaptation for HRI: A survey. Pattern Recognit. Lett. 99, 3–12.
609. M. B. Rosson and J. M. Carroll. 2009. Scenario based design. In Human–Computer Interaction. CRC Press, Boca Raton, FL, 145–162.
610. J. Rubin, H. Eldardiry, R. Abreu, S. Ahern, H. Du, A. Pattekar, and D. G. Bobrow. 2015. Towards a mobile and wearable system for predicting panic attacks. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 529–533.
611. W. Ruch. 1995. Will the real relationship between facial expression and affective experience please stand up: The case of exhilaration. Cogn. Emot. 9, 1, 33–58.
612. O. Rudovic, M. Pantic, and I. Patras. 2012. Coupled Gaussian processes for pose-invariant facial expression recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35, 6, 1357–1369.
613. O. Rudovic, J. Lee, M. Dai, B. Schuller, and R. W. Picard. 2018. Personalized machine learning for robot perception of affect and engagement in autism therapy. Sci. Robot. 3, 19.
614. K. Ruhland, C. E. Peters, S. Andrist, J. B. Badler, N. I. Badler, M. Gleicher, B. Mutlu, and R. McDonnell. 2015. A review of eye gaze in virtual agents, social robotics and HCI: Behaviour generation, user interaction and perception. Comput. Graph. Forum 34, 299–326.
615. P. Ruskamp. 2016. Your Environment and You: Investigating Stress Triggers and Characteristics of the Built Environment. Ph.D. thesis. Kansas State University, Manhattan, KS. https://krex.k-state.edu/dspace/handle/2097/32592.
616. J. A. Russell. 1980. A circumplex model of affect. J. Pers. Soc. Psychol. 39, 6, 1161–1178.
617. J. A. Russell. 1991. Culture and the categorization of emotions. Psychol. Bull. 110, 3, 426–450.
618. S. Russell. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. Penguin.
619. J. A. Russell and A. Mehrabian. 1977. Evidence for a three-factor theory of emotions. J. Res. Pers. 11, 3, 273–294.
620. J. A. Russell, A. Weiss, and G. A. Mendelsohn. 1989. Affect grid: A single-item scale of pleasure and arousal. J. Pers. Soc. Psychol. 57, 3, 493–502. ISSN 0022-3514.
621. J. A. Russell, J.-A. Bachorowski, and J.-M. Fernández-Dols. 2003. Facial and vocal expressions of emotion. Annu. Rev. Psychol. 54, 1, 329–349.
622. D. Ruta and B. Gabrys. 2000. An overview of classifier fusion methods. Comput. Inf. Syst. 7, 1, 1–10. ISSN 1352-9404.
623. M. D. Rutherford, S. Baron-Cohen, and S. Wheelwright. 2002. Reading the mind in the voice: A study with normal adults and adults with Asperger syndrome and high functioning autism. J. Autism Dev. Disord. 32, 3, 189–194.
624. S. Saeb, E. G. Lattie, S. M. Schueller, K. P. Kording, and D. C. Mohr. 2016. The relationship between mobile phone location sensor data and depressive symptom severity. PeerJ 4, e2537. ISSN 2167-8359. https://peerj.com/articles/2537.
625. I. Sakellariou, P. Kefalas, S. Savvidou, I. Stamatopoulou, and M. Ntika. 2016. The role of emotions, mood, personality and contagion in multi-agent system decision making. In IFIP International Conference on Artificial Intelligence Applications and Innovations. Springer, 359–370.
626. T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever. 2017. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864.
627. S. Salmeron-Majadas, O. Santos, and J. Boticario. 2014. Exploring indicators from keyboard and mouse interactions to predict the user affective state. In 7th International Conference on Educational Data Mining. 365–366. https://www.educationaldatamining.org/EDM2014/uploads/procs2014/posters/41_EDM-2014-Poster.pdf.
628. P. Samarati. 2001. Protecting respondents’ identities in microdata release. IEEE Trans. Knowl. Data Eng. 13, 6, 1010–1027.
629. A. Sano and R. W. Picard. 2013. Stress recognition using wearable sensors and mobile phones. In 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction. IEEE, Geneva, Switzerland, 671–676. ISBN 978-0-7695-5048-0.
630. A. Sano, P. Johns, and M. Czerwinski. 2015a. HealthAware: An advice system for stress, sleep, diet and exercise. In 2015 6th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE.
631. A. Sano, A. J. Phillips, A. Z. Yu, A. W. McHill, S. Taylor, N. Jaques, C. A. Czeisler, E. B. Klerman, and R. W. Picard. 2015b. Recognizing academic performance, sleep quality, stress level, and mental health using personality traits, wearable sensors and mobile phones. In 2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN). 1–6.
632. A. Sano, A. Z. Yu, A. W. McHill, A. J. Phillips, S. Taylor, N. Jaques, E. B. Klerman, and R. Picard. 2015c. Prediction of happy–sad mood from daily behaviors and previous sleep history. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
633. A. Sano, P. Johns, and M. Czerwinski. 2017a. Designing opportune stress intervention delivery timing using multi-modal data. In 2017 7th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE.
634. A. Sano, A. Phillips, A. McHill, S. Taylor, L. Barger, C. Czeisler, and R. Picard. 2017b. Influence of weekly sleep regularity on self-reported wellbeing. Sleep 40, A67–A68. ISSN 0161-8105.
635. A. Sano, S. Taylor, A. W. McHill, A. J. Phillips, L. K. Barger, E. Klerman, and R. Picard. 2018. Identifying objective physiological markers and modifiable behaviors for self-reported stress and mental health status using wearable sensors and mobile phones: Observational study. J. Med. Internet Res. ISSN 1438-8871.
636. J. M. Saragih, S. Lucey, and J. F. Cohn. 2011. Deformable model fitting by regularized landmark mean-shift. Int. J. Comput. Vis. 91, 200–215. ISSN 0920-5691.
637. T. R. Sarbin. 2014. Emotions as situated actions. In Emotions in Ideal Human Development. Psychology Press, 89–112.
638. C. Sarkar, S. Bhatia, A. Agarwal, and J. Li. 2014. Feature analysis for computational personality recognition using YouTube personality data set. In Proceedings of the 2014 ACM Multimedia Workshop on Computational Personality Recognition. ACM, 11–14.
639. D. A. Sauter and A. H. Fischer. 2018. Can perceivers recognise emotions from spontaneous expressions? Cogn. Emot. 32, 3, 504–515. ISSN 0269-9931. https://www.tandfonline.com/doi/full/10.1080/02699931.2017.1320978.
640. S. Schachter and J. E. Singer. 1962. Cognitive, social, and physiological determinants of emotional state. Psychol. Rev. 69, 5, 379–399.
641. K. R. Scherer. 1986. Vocal affect expression: A review and a model for future research. Psychol. Bull. 99, 2, 143–165.
642. K. R. Scherer. 2003. Vocal communication of emotion: A review of research paradigms. Speech Commun. 40, 1–2, 227–256.
643. K. R. Scherer, T. Bänziger, and E. Roesch. 2010. A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford University Press.
644. K. R. Scherer, A. Schorr, and T. Johnstone (Eds.). 2001. Appraisal Processes in Emotion: Theory, Methods, Research. Oxford University Press.
645. A. Schirmer and R. Adolphs. 2017. Emotion perception from face, voice, and touch: Comparisons and convergence. Trends Cogn. Sci. 21, 3, 216–228.
646. P. Schmidt, A. Reiss, R. Duerichen, C. Marberger, and K. van Laerhoven. 2018. Introducing WESAD, a multimodal dataset for wearable stress and affect detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction. 400–408.
647. M. A. Schmuckler. 2001. What is ecological validity? A dimensional analysis. Infancy 2, 4, 419–436.
648. A. N. Schore. 2015. Affect Regulation and the Origin of the Self: The Neurobiology of Emotional Development. Routledge.
649. D. Schuler and A. Namioka. 1993. Participatory Design: Principles and Practices. CRC Press.
650. D. Schuller and B. W. Schuller. 2018. The age of artificial emotional intelligence. Computer 51, 9, 38–46.
651. B. Schuller, G. Rigoll, and M. Lang. 2003. Hidden Markov model-based speech emotion recognition. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing—Proceedings.
652. B. Schuller, A. Batliner, S. Steidl, and D. Seppi. 2011. Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge. Speech Commun. 53, 9–10, 1062–1087. ISSN 0167-6393.
653. B. W. Schuller, A. Batliner, C. Bergler, E.-M. Messner, A. Hamilton, S. Amiriparian, A. Baird, G. Rizos, M. Schmitt, L. Stappen, H. Baumeister, A. D. MacIntyre, and S. Hantke. 2020. The INTERSPEECH 2020 Computational Paralinguistics Challenge: Elderly emotion, breathing & masks. In Proceedings of INTERSPEECH. ISCA, Shanghai, China, 2042–2046.
654. N. Sebe, I. Cohen, and T. S. Huang. 2005. Multimodal emotion recognition. In Handbook of Pattern Recognition and Computer Vision. World Scientific. https://www.worldscientific.com/doi/abs/10.1142/9789812775320_0021.
655. N. Sebe, I. Cohen, T. Gevers, and T. S. Huang. 2006. Emotion recognition based on joint visual and audio cues. In 18th International Conference on Pattern Recognition (ICPR’06), Vol. 1. IEEE, 1136–1139.
656. E. Sedenberg, J. Chuang, and D. Mulligan. 2016. Designing commercial therapeutic robots for privacy preserving systems and ethical research practices within the home. Int. J. Soc. Robot. 8, 4, 575–587.
657. P. Sequeira, F. S. Melo, and A. Paiva. 2011. Emotion-based intrinsic motivation for reinforcement learning agents. In S. D’Mello, A. Graesser, B. Schuller, and J.-C. Martin (Eds.), Affective Computing and Intelligent Interaction, ACII 2011. Lecture Notes in Computer Science, Vol. 6974. Springer, Berlin, 326–336. ISBN 978-3-642-24600-5.
658. M. Shah, B. Mears, C. Chakrabarti, and A. Spanias. 2012. Lifelogging: Archival and retrieval of continuously recorded audio using wearable devices. In 2012 IEEE International Conference on Emerging Signal Processing Applications. 99–102.
659. G. Sharma and A. Dhall. 2021. A survey on automatic multimodal emotion recognition in the wild. In G. Phillips-Wren, A. Esposito, and L. C. Jain (Eds.), Advances in Data Science: Methodologies and Applications. Intelligent Systems Reference Library, Vol. 189. Springer, Cham, 35–64.
660. P. E. Shrout and J. L. Rodgers. 2018. Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annu. Rev. Psychol. 69, 487–510.
661. L. Shu, J. Xie, M. Yang, Z. Li, Z. Li, D. Liao, X. Xu, and X. Yang. 2018. A review of emotion recognition using physiological signals. Sensors 18, 7, 2074. ISSN 1424-8220. http://www.mdpi.com/1424-8220/18/7/2074.
662. S. Siddharth, T.-P. Jung, and T. J. Sejnowski. 2018. Multi-modal approach for affective computing. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 291–294.
663. S. K. D’Mello, S. D. Craig, B. Gholson, S. Franklin, R. Picard, and A. C. Graesser. 2005. Integrating affect sensors in an intelligent tutoring system. In Affective Interactions: The Computer in the Affective Loop Workshop at the 2005 International Conference on Intelligent User Interfaces. ACM Press, 7–13.
  664. E. Siedlecka and T. F. Denson. 2019. Experimental methods for inducing basic emotions: A qualitative review. Emot. Rev. 11, 1, 87–97. ISSN: 1754-0739. http://journals.sagepub.com/doi/10.1177/1754073917749016.
  665. E. H. Siegel, M. K. Sands, W. van den Noortgate, P. Condon, Y. Chang, J. Dy, K. S. Quigley, and L. F. Barrett. 2018. Emotion fingerprints or emotion populations? A meta-analytic investigation of autonomic features of emotion categories. Psychol. Bull. 144, 4, 343.
  666. I. Siegert, R. Böck, and A. Wendemuth. 2014. Inter-rater reliability for emotion annotation in human–computer interaction: Comparison and methodological improvements. J. Multimodal User In. 8, 1, 17–28. ISSN: 1783-8738.
  667. K. Sikka, K. Dykstra, S. Sathyanarayana, G. Littlewort, and M. Bartlett. 2013. Multiple kernel learning for emotion recognition in the wild. In ICMI 2013—Proceedings of the 2013 15th ACM International Conference on Multimodal Interaction. 517–524. ISBN: 9781450321297.
  668. H. A. Simon. 1967. Motivational and emotional controls of cognition. Psychol. Rev. 74, 1, 29–39.
  669. K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  670. S. Singh, A. G. Barto, and N. Chentanez. 2004. Intrinsically motivated reinforcement learning. In Proceedings of the 17th International Conference on Neural Information Processing Systems, NIPS'04. MIT Press, Cambridge, 1281–1288.
  671. D. Singh, I. Psychoula, E. Merdivan, J. Kropf, S. Hanke, E. Sandner, L. Chen, and A. Holzinger. 2020. Privacy-enabled smart home framework with voice assistant. In F. Chen, R. García-Betances, L. Chen, M. Cabrera-Umpiérrez, and C. Nugent (Eds.), Smart Assisted Living. Computer Communications and Networks. Springer, Cham, 321–339.
  672. V. Sivaraman, H. H. Gharakheili, A. Vishwanath, R. Boreli, and O. Mehani. 2015. Network-level security and privacy control for smart-home IoT devices. In 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). IEEE, 163–167.
  673. B. F. Skinner. 1938. The Behavior of Organisms: An Experimental Analysis. Appleton-Century.
  674. B. F. Skinner. 1966. Operant behavior. In W. K. Honig (Ed.), Operant Behavior: Areas of Research and Application. Appleton-Century-Crofts, New York.
  675. M. Skowron, M. Theunis, S. Rank, and A. Kappas. 2013. Affect and social processes in online communication—Experiments with an affective dialog system. IEEE Trans. Affect. Comput. 4, 3, 267–279.
  676. P. Slovic, E. Peters, M. L. Finucane, and D. G. MacGregor. 2005. Affect, risk, and decision making. Health Psychol. 24, 4S, S35–S40.
  677. R. Smiljanic and R. C. Gilbert. 2017. Acoustics of clear and noise-adapted speech in children, young, and older adults. J. Speech Lang. Hear. Res. 60, 11, 3081–3096.
  678. C. A. Smith and P. C. Ellsworth. 1985. Patterns of cognitive appraisal in emotion. J. Pers. Soc. Psychol. 48, 4, 813–838.
  679. C. A. Smith and L. D. Kirby. 2011. The role of appraisal and emotion in coping and adaptation. In The Handbook of Stress Science: Biology, Psychology, and Health. Springer, 195.
  680. C. A. Smith and R. S. Lazarus. 1990. Emotion and adaptation. In L. A. Pervin (Ed.), Handbook of Personality: Theory and Research. Guilford, New York, 609–637.
  681. I. Sneddon, M. McRorie, G. McKeown, and J. Hanratty. 2012. The Belfast induced natural emotion database. IEEE Trans. Affect. Comput. 3, 1, 32–41.
  682. D. K. Snyder, R. E. Heyman, and S. N. Haynes. 2005. Evidence-based approaches to assessing couple distress. Psychol. Assess. 17, 3, 288–307.
  683. M. Soleymani, M. Pantic, and T. Pun. 2011. Multimodal emotion recognition in response to videos. IEEE Trans. Affect. Comput. 3, 1, 211–223. https://ieeexplore.ieee.org/abstract/document/6095505/.
  684. M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic. 2012. A multimodal database for affect recognition and implicit tagging. IEEE Trans. Affect. Comput. 3, 1, 42–55.
  685. M. Soleymani, S. Asghari-Esfeden, Y. Fu, and M. Pantic. 2015. Analysis of EEG signals and facial expressions for continuous emotion detection. IEEE Trans. Affect. Comput. 7, 1, 17–28.
  686. S. Song and S. Yamada. 2017. Expressing emotions through color, sound, and vibration with an appearance-constrained social robot. In 2017 12th ACM/IEEE International Conference on Human–Robot Interaction (HRI). IEEE, 2–11.
  687. J. Speck. 2013. Walkable City: How Downtown Can Save America, One Step at a Time. Macmillan, New York, NY. ISBN: 978-0-374-28581.
  688. M. Spezialetti, G. Placidi, and S. Rossi. 2020. Emotion recognition for human–robot interaction: Recent advances and future perspectives. Front. Robot. AI 7, 532279.
  689. C. D. Spielberger, R. L. Gorsuch, and R. E. Lushene. 1970. STAI Manual for the State-Trait Anxiety Inventory (Self-Evaluation Questionnaire). Consulting Psychologists Press, CA. https://psycnet.apa.org/doi/10.1037/t06496-000.
  690. R. L. Spitzer, K. Kroenke, J. B. Williams, and B. Löwe. 2006. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch. Intern. Med. 166, 1092–1097. ISSN: 0003-9926.
  691. R. Srinivasan and A. M. Martinez. 2018. Cross-cultural and cultural-specific production and perception of facial expressions of emotion in the wild. IEEE Trans. Affect. Comput. 12, 3, 707–721.
  692. R. A. Stevenson and T. W. James. 2008. Affective auditory stimuli: Characterization of the International Affective Digitized Sounds (IADS) by discrete emotional categories. Behav. Res. Methods 40, 1, 315–321. ISSN: 1554-351X. http://link.springer.com/10.3758/BRM.40.1.315.
  693. R. A. Stevenson, J. A. Mikels, and T. W. James. 2007. Characterization of the affective norms for English words by discrete emotional categories. Behav. Res. Methods 39, 4, 1020–1024.
  694. J. R. Stroop. 1935. Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 6, 643–662. ISSN: 0022-1015.
  695. S. Sumartojo, D. Lugli, D. Kulić, L. Tian, P. Carreno-Medrano, M. Mintrom, and A. Allen. 2020. Robotic Logics of Public Space in the Covid Pandemic. Retrieved Feb 1, 2021, from https://www.mediapolisjournal.com/2020/08/robotic-logics-of-public-space/.
  696. L. W. Sumner. 1996. Welfare, Happiness, and Ethics. Clarendon Press.
  697. B. Sun, L. Li, T. Zuo, Y. Chen, G. Zhou, and X. Wu. 2014a. Combining multimodal features with hierarchical classifier fusion for emotion recognition in the wild. In ICMI 2014—Proceedings of the 2014 International Conference on Multimodal Interaction. 481–486. ISBN: 9781450328852.
  698. D. Sun, P. Paredes, and J. Canny. 2014b. MouStress: Detecting stress from mouse motion. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI'14. Association for Computing Machinery, New York, NY, 61–70. ISBN: 9781450324731.
  699. R. Sutton and A. Barto. 2018. Reinforcement Learning: An Introduction (2nd. ed.). Adaptive Computation and Machine Learning. The MIT Press, Cambridge, MA.
  700. R. E. Sutton and K. F. Wheatley. 2003. Teachers' emotions and teaching: A review of the literature and directions for future research. Educ. Psychol. Rev. 15, 4, 327–358.
  701. M. Swain, A. Routray, and P. Kabisatpathy. 2018. Databases, features and classifiers for speech emotion recognition: A review. Int. J. Speech Technol. 21, 1, 93–120.
  702. T. Tamura, Y. Maeda, M. Sekine, and M. Yoshida. 2014. Wearable photoplethysmographic sensors—Past and present. Electronics 3, 2, 282–302. ISSN: 2079-9292. http://www.mdpi.com/2079-9292/3/2/282.
  703. A. Tanaka, A. Koizumi, H. Imai, S. Hiramatsu, E. Hiramoto, and B. de Gelder. 2010. I feel your voice: Cultural differences in the multisensory perception of emotion. Psychol. Sci. 21, 9, 1259–1262.
  704. E. M. Tapia, S. S. Intille, and K. Larson. 2004. Activity recognition in the home using simple and ubiquitous sensors. In A. Ferscha and F. Mattern (Eds.), Pervasive Computing. Pervasive 2004. Lecture Notes in Computer Science, Vol. 3001. Springer, Berlin, 158–175.
  705. C. Tappolet. 2016. Emotions, Value, and Agency. Oxford University Press.
  706. Y. R. Tausczik and J. W. Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. J. Lang. Soc. Psychol. 29, 1, 24–54.
  707. S. Taylor, N. Jaques, W. Chen, S. Fedor, A. Sano, and R. Picard. 2015. Automatic identification of artifacts in electrodermal activity data. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. ISBN: 9781424492718.
  708. S. Taylor, N. Jaques, E. Nosakhare, A. Sano, and R. Picard. 2017. Personalized multitask learning for predicting tomorrow's mood, stress, and health. IEEE Trans. Affect. Comput. 11, 2, 200–213.
  709. T. ter Bogt, N. Canale, M. Lenzi, A. Vieno, and R. van den Eijnden. 2021. Sad music depresses sad adolescents: A listener's profile. Psychol. Music 49, 2, 1–16.
  710. A. Thieme, D. Belgrave, and G. Doherty. 2020. Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Trans. Comput. Hum. Interact. 27, 5, 1–5. ISSN: 1073-0516.
  711. P. A. Thoits. 2004. Emotion norms, emotion work, and social order. In A. S. R. Manstead, N. Frijda, and A. Fischer (Eds.), Feelings and Emotions: The Amsterdam Symposium. Cambridge University Press, New York, NY, 359–378.
  712. A. Thomaz, G. Hoffman, and M. Cakmak. 2016. Computational human–robot interaction. Found. Trends Robot. 4, 2, 105–223. ISSN: 1935-8253.
  713. E. L. Thorndike. 1898. Animal intelligence: An experimental study of the associative processes in animals. Psychol. Rev. Monogr. Suppl. 2, 4, i–109.
  714. E. L. Thorndike. 1911. Animal Intelligence: Experimental Studies. Macmillan Press, New York.
  715. S. Thrun and T. M. Mitchell. 1995. Lifelong robot learning. Rob. Auton. Syst. 15, 1–2, 25–46.
  716. S. Thunberg and T. Ziemke. 2020. Are people ready for social robots in public spaces? In IEEE International Conference on Human–Robot Interaction, HRI'20. ACM, 482–484.
  717. L. Tian and S. Oviatt. 2021. A taxonomy of social errors in human–robot interaction. ACM Trans. Hum. Robot Interact. 10, 2, 1–32.
  718. Y.-L. Tian, T. Kanade, and J. F. Cohn. 2001. Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23, 2, 97–115.
  719. Y. Tian, T. Kanade, and J. Cohn. 2005. Facial expression analysis. In Handbook of Face Recognition. Springer, New York, NY, 247–275.
  720. L. Tian, J. Moore, and C. Lai. 2016. Recognizing emotions in spoken dialogue with hierarchically fused acoustic and lexical features. In D. Hakkani-Tur, J. Hirschberg, D. Reynolds, F. Seide, Z. Hua Tan, and D. Povey (Eds.), 2016 IEEE Spoken Language Technology Workshop (SLT). IEEE, 565–572.
  721. L. Tian, M. Muszynski, C. Lai, J. D. Moore, T. Kostoulas, P. Lombardo, T. Pun, and G. Chanel. 2017. Recognizing induced emotions of movie audiences: Are induced and perceived emotions the same? In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 28–35.
  722. L. Tian, P. Carreno-Medrano, S. Sumartojo, M. Mintrom, E. Coronado, G. Venture, and D. Kulić. 2020. User expectations of robots in public spaces: A co-design methodology. In A. R. Wagner et al. (Eds.), International Conference on Social Robotics. ICSR 2020. Lecture Notes in Computer Science, Vol. 12483. Springer, Cham, 259–270.
  723. L. Tian, P. Carreno-Medrano, A. Allen, S. Sumartojo, M. Mintrom, E. Coronado, G. Venture, E. Croft, and D. Kulić. 2021. Redesigning human–robot interaction in response to robot failures: A participatory design methodology. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–8.
  724. R. Tourangeau and P. C. Ellsworth. 1979. The role of facial response in the experience of emotion. J. Pers. Soc. Psychol. 37, 9, 1519–1531.
  725. D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. 2015. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 4489–4497.
  726. C. Tsiourti, A. Weiss, K. Wac, and M. Vincze. 2019. Multimodal integration of emotional signals from voice, body, and context: Effects of (in)congruence on emotion recognition and attitudes towards robots. Int. J. Soc. Robot. 11, 4, 555–573.
  727. J. Tu, G. Yu, J. Wang, C. Domeniconi, and X. Zhang. 2020. Attention-aware answers of the crowd. In Proceedings of the 2020 SIAM International Conference on Data Mining. SIAM, 451–459.
  728. M. Turk. 2014. Multimodal interaction: A review. Pattern Recognit. Lett. 36, 189–195. ISSN: 0167-8655.
  729. J. M. Tybur, D. Lieberman, R. Kurzban, and P. DeScioli. 2013. Disgust: Evolved function and structure. Psychol. Rev. 120, 1, 65–84.
  730. P. Tzirakis, G. Trigeorgis, M. A. Nicolaou, B. W. Schuller, and S. Zafeiriou. 2017. End-to-end multimodal emotion recognition using deep neural networks. IEEE J. Sel. Top. Signal Process. 11, 8, 1301–1309. https://ieeexplore.ieee.org/abstract/document/8070966/.
  731. T. Umematsu, A. Sano, and R. Picard. 2019a. Daytime data and LSTM can forecast tomorrow's stress, health, and happiness. In 2019 IEEE Engineering in Medicine and Biology Conference (EMBC).
  732. T. Umematsu, A. Sano, S. Taylor, and R. Picard. 2019b. Improving students' daily life stress forecasting using LSTM neural networks. In 2019 IEEE International Conference on Biomedical and Health Informatics (BHI).
  733. UN General Assembly. 1949. Universal Declaration of Human Rights, Vol. 3381. Department of State, United States of America.
  734. United Nations. 2020. COVID-19 and human rights: We are all in this together. Retrieved March 4, 2021, from https://unsdg.un.org/resources/covid-19-and-human-rights-we-are-all-together.
  735. G. Valenza, A. Lanata, and E. P. Scilingo. 2012. The role of nonlinear dynamics in affective valence and arousal recognition. IEEE Trans. Affect. Comput. 3, 2, 237–249.
  736. G. Valenza, L. Citi, A. Lanata, E. P. Scilingo, and R. Barbieri. 2014. Revealing real-time emotional responses: A personalized assessment based on heartbeat dynamics. Sci. Rep. 4, 1, 1–13.
  737. J. Vallverdú. 2009. Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. IGI Global.
  738. M. Valstar. 2019. Multimodal databases. In The Handbook of Multimodal–Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions, Vol. 3. Association for Computing Machinery and Morgan & Claypool, 393–421.
  739. B. van Rijn, M. Cooper, A. Jackson, and C. Wild. 2017. Avatar-based therapy within prison settings: Pilot evaluation. Br. J. Guid. Counc. 45, 3, 268–283.
  740. A. E. van't Veer and R. Giner-Sorolla. 2016. Pre-registration in social psychology—A discussion and suggested template. J. Exp. Soc. Psychol. 67, 2–12.
  741. E. Vasey, S. Ko, and M. Jeon. 2018. In-vehicle affect detection system: Identification of emotional arousal by monitoring the driver and driving style. In Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI'18. Association for Computing Machinery, New York, NY, 243–247. ISBN: 9781450359474.
  742. D. Vasquez, B. Okal, and K. O. Arras. 2014. Inverse reinforcement learning algorithms and features for robot navigation in crowds: An experimental comparison. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 1341–1346.
  743. G. K. Verma and U. S. Tiwary. 2014. Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals. NeuroImage 102, Pt 1, 162–172. https://www.sciencedirect.com/science/article/pii/S1053811913010999.
  744. V. S. Verykios. 2013. Association rule hiding methods. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 3, 1, 28–36.
  745. G. Vigliocco, L. Meteyard, M. Andrews, and S. Kousta. 2009. Toward a theory of semantic representation. Lang. Cogn. 1, 2, 219–247.
  746. A. Vinciarelli, M. Pantic, D. Heylen, C. Pelachaud, I. Poggi, F. D'Errico, and M. Schroeder. 2012. Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Trans. Affect. Comput. 3, 1, 69–87.
  747. P. Viola and M. J. Jones. 2004. Robust real-time face detection. Int. J. Comput. Vis. 57, 137–154. ISSN: 0920-5691.
  748. P. Voigt and A. Von dem Bussche. 2017. The EU General Data Protection Regulation (GDPR): A Practical Guide (1st. ed.). Springer International Publishing, Cham.
  749. B. J. Walker. 2018. Smart Grid System Report 2018. Report to Congress, United States Department of Energy.
  750. H. G. Wallbott and K. R. Scherer. 1986. Cues and channels in emotion recognition. J. Pers. Soc. Psychol. 51, 4, 690–699.
  751. M. Wand and T. Schultz. 2014. Pattern learning with deep neural networks in EMG-based speech recognition. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 4200–4203.
  752. D. Wang and Y. Shang. 2013. Modeling physiological data with deep belief networks. Int. J. Inf. Educ. Technol. 3, 5, 505–511.
  753. R. Wang, F. Chen, Z. Chen, T. Li, G. Harari, S. Tignor, X. Zhou, D. Ben-Zeev, and A. T. Campbell. 2014a. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In UbiComp 2014—Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ISBN: 9781450329682.
  754. W. Wang, G. Athanasopoulos, S. Yilmazyildiz, G. Patsis, V. Enescu, H. Sahli, W. Verhelst, A. Hiolle, M. Lewis, and L. Canamero. 2014b. Natural emotion elicitation for emotion modeling in child–robot interactions. In WOCCI, 51–56. https://www.isca-speech.org/archive_v0/wocci_2014/papers/wc14_051.pdf.
  755. S. Wang, S. O. Lilienfeld, and P. Rochat. 2015. The uncanny valley: Existence and explanations. Rev. Gen. Psychol. 19, 4, 393–407.
  756. R. Wang, W. Wang, A. daSilva, J. F. Huckins, W. M. Kelley, T. F. Heatherton, and A. T. Campbell. 2018. Tracking depression dynamics in college students using mobile phone and wearable sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 1, 1–26.
  757. R. Wang, Y. Yuan, Y. Liu, J. Zhang, P. Liu, Y. Lu, and Y. Yao. 2019. Using street view data and machine learning to assess how perception of neighborhood safety influences urban residents' mental health. Health Place 59, 102186. ISSN: 1353-8292. http://www.sciencedirect.com/science/article/pii/S1353829219304526.
  758. D. Watson and L. A. Clark. 1999. The PANAS-X: Manual for the positive and negative affect schedule—Expanded form. University of Iowa. https://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf.
  759. D. Watson, L. A. Clark, and A. Tellegen. 1988. Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Pers. Soc. Psychol. 54, 6, 1063–1070. ISSN: 0022-3514.
  760. D. F. Watt. 2018. Psychotherapy in an age of neuroscience: Bridges to affective neuroscience. In Revolutionary Connections. Routledge, 79–115.
  761. G. Weinberg, M. Bretan, G. Hoffman, and S. Driscoll. 2020. Robotic Musicianship: Embodied Artificial Creativity and Mechatronic Musical Expression, Vol. 8. Springer Nature.
  762. K. K. Weisel, L. M. Fuhrmann, M. Berking, H. Baumeister, P. Cuijpers, and D. D. Ebert. 2019. Standalone smartphone apps for mental health—A systematic review and meta-analysis. NPJ Digit. Med. 2, 118.
  763. A. Weiss, R. Bernhaupt, M. Lankes, and M. Tscheligi. 2009. The USUS evaluation framework for human–robot interaction. In AISB2009: Proceedings of the Symposium on New Frontiers in Human–Robot Interaction, Vol. 4. 11–26.
  764. J. Weizenbaum. 1972. How does one insult a machine? Science 176, 609–614.
  765. J. Weizenbaum. 1976. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company.
  766. Z. Wen and T. S. Huang. 2003. Capturing subtle facial motions in 3D face tracking. In Proceedings Ninth IEEE International Conference on Computer Vision. IEEE, 1343–1350.
  767. A. F. Westin. 1968. Privacy and freedom. Wash. Lee Law Rev. 25, 1, 166.
  768. C. M. Whissell. 1989. The dictionary of affect in language. In The Measurement of Emotions. Elsevier, 113–131.
  769. T. Whitaker. 2018. Linking Affect and the Built Environment Using Mobile Sensors and Geospatial Analysis. Ph.D. thesis. Kansas State University, Manhattan, KS. https://krex.k-state.edu/dspace/handle/2097/38895.
  770. E. White. 1983. Site Analysis: Diagramming Information for Architectural Design. Architectural Media. ISBN: 978-1928643043. https://books.google.com/books?id=oV4WRAAACAAJ.
  771. S. Whitehead, J. Karlsson, and J. Tenenberg. 1993. Learning multiple goal behavior via task decomposition and dynamic policy merging. In J. H. Connell and S. Mahadevan (Eds.), Robot Learning. The Springer International Series in Engineering and Computer Science (Knowledge Representation, Learning and Expert Systems), Vol. 233. Springer, Boston, MA, 45–78. ISBN: 978-1-4615-3184-5.
  772. S. C. Widen and J. A. Russell. 2008. Children acquire emotion categories gradually. Cogn. Dev. 23, 2, 291–312.
  773. J. H. G. Williams, C. F. Huggins, B. Zupan, M. Willis, T. E. van Rheenen, W. Sato, R. Palermo, C. Ortner, M. Krippl, M. Kret, J. M. Dickson, C. R. Li, and L. Lowe. 2020. A sensorimotor control framework for understanding emotional communication and regulation. Neurosci. Biobehav. Rev. 112, 503–518.
  774. K. Winkle, P. Caleb-Solly, A. Turton, and P. Bremner. 2019. Mutual shaping in the design of socially assistive robots: A case study on social robots for therapy. Int. J. Soc. Robot. 1–20.
  775. K. L. Wolf. 2005. Business district streetscapes, trees, and consumer response. J. For. 103, 8, 396–400.
  776. K. Wolf, A. Schmidt, A. Bexheti, and M. Langheinrich. 2014. Lifelogging: You're wearing a camera? IEEE Pervasive Comput. 13, 3, 8–12.
  777. M. Wöllmer, F. Eyben, S. Reiter, B. Schuller, C. Cox, E. Douglas-Cowie, and R. Cowie. 2008. Abandoning emotion classes—Towards continuous emotion recognition with modelling of long-range dependencies. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH.
  778. R. S. Woodworth and H. Schlosberg. 1938. Experimental Psychology. Henry Holt and Company, New York.
  779. World Health Organization. 2019a. Mental Disorders. Fact sheet, World Health Organization. Retrieved March 4, 2021, from https://www.who.int/news-room/fact-sheets/detail/mental-disorders.
  780. World Health Organization. 2019b. Mental Health in the Workplace. Information sheet, World Health Organization. Retrieved March 4, 2021, from https://www.who.int/mental_health/in_the_workplace/en/.
  781. C. H. Wu, Z. J. Chuang, and Y. C. Lin. 2006. Emotion recognition from text using semantic labels and separable mixture models. ACM Trans. Asian Lang. Inf. Process. 5, 2, 165–182. ISSN: 1530-0226.
  782. J. Wu, Z. Lin, and H. Zha. 2015. Multiple models fusion for emotion recognition in the wild. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. 475–481.
  783. X. Xiao and Y. Tao. 2006. Personalized privacy preservation. In Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data. 229–240.
  784. B. Xiao, Z. E. Imel, P. Georgiou, D. C. Atkins, and S. S. Narayanan. 2016. Computational analysis and simulation of empathic behaviors: A survey of empathy modeling with behavioral signal processing framework. Curr. Psychiatry Rep. 18, 5, 49.
  785. P. Xie, M. Bilenko, T. Finley, R. Gilad-Bachrach, K. Lauter, and M. Naehrig. 2014. Crypto-nets: Neural networks over encrypted data. arXiv preprint arXiv:1412.6181.
  786. C. Xu, S. Cetintas, K. Lee, and L. Li. 2014. Visual sentiment prediction with deep convolutional neural networks. arXiv preprint arXiv:1411.5731.
  787. Y. Xu, I. Hübener, A.-K. Seipp, S. Ohly, and K. David. 2017. From the lab to the real-world: An investigation on the influence of human movement on emotion recognition using physiological signals. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 345–350.
  788. T. Xu, J. White, S. Kalkan, and H. Gunes. 2020. Investigating bias and fairness in facial expression recognition. In A. Bartoli and A. Fusiello (Eds.), Computer Vision—ECCV 2020 Workshops. Springer, 506–523.
  789. M. Yadav, T. Chaspari, J. Kim, and C. R. Ahn. 2018. Capturing and quantifying emotional distress in the built environment. In Proceedings of the Workshop on Human–Habitat for Health (H3): Human–Habitat Multimodal Interaction for Promoting Health and Well-Being in the Internet of Things Era, H3'18. Boulder, Colorado. ACM, New York, NY, 9:1–9:8. ISBN: 978-1-4503-6075-3.
  790. E. Yadegaridehkordi, N. F. B. M. Noor, M. N. B. Ayub, H. B. Affal, and N. B. Hussin. 2019. Affective computing in education: A systematic review and future research. Comput. Educ. 142, 103649. ISSN: 0360-1315.
  791. T. Yanaru. 1995. An emotion processing system based on fuzzy inference and subjective observations. In Proceedings—1995 2nd New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, ANNES 1995. ISBN: 0818671742.
  792. C.-C. Yang and Y.-L. Hsu. 2012. Remote monitoring and assessment of daily activities in the home environment. J. Clin. Gerontol. Geriatr. 3, 3, 97–104. ISSN: 2210-8335.
  793. G. N. Yannakakis. 2018. Enhancing health care via affective computing. Malta J. Health Sci. 5, 1, 38–42.
  794. G. N. Yannakakis, R. Cowie, and C. Busso. 2017. The ordinal nature of emotions. In International Conference on Affective Computing and Intelligent Interaction (ACII).
  795. Y. Yao, Z. Liang, Z. Yuan, P. Liu, Y. Bie, J. Zhang, R. Wang, J. Wang, and Q. Guan. 2019. A human–machine adversarial scoring framework for urban perception assessment using street-view images. Int. J. Geogr. Inf. Sci. 33, 12, 2363–2384. ISSN: 1365-8816.
  796. H. Yates. 2018. Affective Intelligence in Built Environments. Ph.D. thesis. Kansas State University, Manhattan, KS. https://krex.k-state.edu/dspace/handle/2097/38790.
  797. H. Yates, B. Chamberlain, and W. H. Hsu. 2017. A spatially explicit classification model for affective computing in built environments. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 100–104.
  798. A. Yatsuda, T. Haramaki, and H. Nishino. 2018. A robot gesture framework for watching and alerting the elderly. In International Conference on Network-Based Information Systems. Springer, 132–143.
  799. L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato. 2006. A 3D facial expression database for facial behavior research. In 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE, 211–216.
  800. Z. Yin, M. Zhao, Y. Wang, J. Yang, and J. Zhang. 2017. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Comput. Methods Programs Biomed. 140, 93–110.
  801. H. Yu, E. B. Klerman, R. W. Picard, and A. Sano. 2019. Personalized wellbeing prediction using behavioral, physiological and weather data. In 2019 IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2019—Proceedings. ISBN: 9781728108483.
  802. J. C. Yuille and G. L. Wells. 1991. Concerns about the application of research findings: The issue of ecological validity. In J. Doris (Ed.), The Suggestibility of Children's Recollections. American Psychological Association, Washington, DC, 118–128.
  803. A. B. Zadeh, P. P. Liang, S. Poria, E. Cambria, and L.-P. Morency. 2018. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers), 2236–2246.
  804. S. Zafeiriou, A. Papaioannou, I. Kotsia, M. Nicolaou, and G. Zhao. 2016. Facial affect "in-the-wild": A survey and a new database. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1487–1498.
  805. S. Zafeiriou, D. Kollias, M. A. Nicolaou, A. Papaioannou, G. Zhao, and I. Kotsia. 2017. Aff-wild: Valence and arousal 'in-the-wild' challenge. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1980–1987.
  806. A. Zafiroglu, J. Healey, and T. Plowman. 2012. Navigation to multiple local transportation futures: Cross-interrogating remembered and recorded drives. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI'12. Association for Computing Machinery, New York, NY, 139–146. ISBN: 9781450317511.
  807. J. Zaki. 2018. Empathy is a moral force. In K. Gray and J. Graham (Eds.), Atlas of Moral Psychology. Guilford Press, 49–58.
  808. M. D. Zeiler, G. W. Taylor, L. Sigal, I. Matthews, and R. Fergus. 2011. Facial expression transfer with input–output temporal restricted Boltzmann machines. In Advances in Neural Information Processing Systems, 1629–1637.
  809. Z. Zeng, Y. Hu, M. Liu, Y. Fu, and T. S. Huang. 2006. Training combination strategy of multi-stream fused hidden Markov model for audio-visual affect recognition. In Proceedings of the 14th ACM International Conference on Multimedia. ACM, 65–68.
  810. Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang. 2008. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31, 1, 39–58.
  811. C. Zhang and Z. Zhang. 2010. A Survey of Recent Advances in Face Detection. Technical report MSR-TR-2010-66. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/facedetsurvey.pdf.
  812. F. Zhang, B. Zhou, L. Liu, Y. Liu, H. H. Fung, H. Lin, and C. Ratti. 2018a. Measuring human perceptions of a large-scale urban region using machine learning. Landsc. Urban Plan. 180, 148–160. ISSN: 0169-2046. http://www.sciencedirect.com/science/article/pii/S0169204618308545.
  813. G. Zhang, D. Lu, and H. Liu. 2018b. Strategies to utilize the positive emotional contagion optimally in crowd evacuation. IEEE Trans. Affect. Comput. 11, 4, 708–721.
  814. Y. Zhang, F. Weninger, B. Schuller, and R. Picard. 2019. Holistic affect recognition using PaNDA: Paralinguistic non-metric dimensional analysis. IEEE Trans. Affect. Comput.
  815. D. Zhang, S. Mishra, E. Brynjolfsson, J. Etchemendy, D. Ganguli, B. Grosz, T. Lyons, J. Manyika, J. C. Niebles, M. Sellitto, Y. Shoham, J. Clark, and R. Perrault. 2021. The AI Index 2021 Annual Report. AI Index Steering Committee, Human-Centered AI Institute, Stanford University. https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report_Master.pdf.
  816. R. Zhao, T. Sinha, A. W. Black, and J. Cassell. 2016. Socially-aware virtual agents: Automatically assessing dyadic rapport from temporal patterns of behavior. In International Conference on Intelligent Virtual Agents. Springer, 218–233.
  817. M. Zhao, F. Adib, and D. Katabi. 2018. Emotion recognition using wireless signals. Commun. ACM 61, 9, 91–100. ISSN: 0001-0782.
  818. J. Zhao, R. Li, J. Liang, S. Chen, and Q. Jin. 2019a. Adversarial domain adaption for multi-cultural dimensional emotion recognition in dyadic interactions. In Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop. 37–45.
  819. P. Zhao, C. X. Lu, J. Wang, C. Chen, W. Wang, N. Trigoni, and A. Markham. 2019b. mID: Tracking and identifying people with millimeter wave radar. In 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS). 33–40.
  820. S. Zhou and L. Tian. 2020. Would you help a sad robot? Influence of robots' emotional expressions on human–multi-robot collaboration. In 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 1243–1250.
  821. X. Zhu, W.-L. Zheng, B.-L. Lu, X. Chen, S. Chen, and C. Wang. 2014. EOG-based drowsiness detection using convolutional neural networks. In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 128–134.
  822. S. Zuboff. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Cited By

  1. Urbanik P and Mikulecky P (2025). Smart Cities as Affective Environments. Progress in Artificial Intelligence, 10.1007/978-3-031-73497-7_13, (154-165).
  2. Klęczek K, Rice A and Alimardani M (2024). Robots as Mental Health Coaches: A Study of Emotional Responses to Technology-Assisted Stress Management Tasks Using Physiological Signals. Sensors, 10.3390/s24134032, 24:13, (4032).
  3. Mishra D, Deshpande S, Anna M and Tiwari A (2024). Exploring the Ethical Dimensions and Societal Consequences of Affective Computing. Affective Computing for Social Good, 10.1007/978-3-031-63821-3_5, (91-105).
  4. Uddin M, Zamzmi G and Canavan S (2023). Cooperative Learning for Personalized Context-Aware Pain Assessment From Wearable Data. IEEE Journal of Biomedical and Health Informatics, 10.1109/JBHI.2023.3294903, 27:11, (5260-5271).
  5. D'Amelio T, Bruno N, Bugnon L, Zamberlan F and Tagliazucchi E (2023). Affective Computing as a Tool for Understanding Emotion Dynamics from Physiology: A Predictive Modeling Study of Arousal and Valence. 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 10.1109/ACIIW59127.2023.10388155, 979-8-3503-2745-8, (1-7).
  6. Bonilla-Huerta E, Morales-Caporal R, Sánchez-Lucero E, Hernández-Hernández C and González-Meneses Y (2022). Hybrid Model Recognition and Classification of Human Emotions in Thermal Images. Proceedings of the Technical University of Sofia, 10.47978/TUS.2022.72.03.004, 72:3. Online publication date: 14-Nov-2022.
  7. Surov I (2022). Opening the Black Box: Finding Osgood's Semantic Factors in Word2vec Space. Informatics and Automation, 10.15622/ia.21.5.3, 21:5, (916-936).
  8. Stappen L, Baird A, Lienhart M, Bätz A and Schuller B (2022). An Estimation of Online Video User Engagement From Features of Time- and Value-Continuous, Dimensional Emotions. Frontiers in Computer Science, 10.3389/fcomp.2022.773154, 4.
  9. Champion E (2022). What Have We Learnt from Game–Style Interaction? Playing with the Past: Into the Future, 10.1007/978-3-031-10932-4_5, (93-137).
Contributors
  • Monash University
  • Monash University
  • IBM Research - Zurich
  • Utah State University
  • Adobe Inc.
  • Rice University