[1] 彭聃龄 (2018). 普通心理学 [General Psychology]. 北京师范大学出版社 [Beijing Normal University Press].
[2] Bazo, D., Vaidyanathan, R., Lentz, A., & Melhuish, C. (2010). Design and Testing of a Hybrid Expressive Face for a Humanoid Robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 5317-5322). IEEE. https://doi.org/10.1109/IROS.2010.5651469
[3] Beck, A., Cañamero, L., & Bard, K. A. (2010). Towards an Affect Space for Robots to Display Emotional Body Language. In 19th IEEE International Symposium on Robot and Human Interactive Communication (pp. 12-15). IEEE. https://doi.org/10.1109/ROMAN.2010.5598649
[4] Beck, A., Stevens, B., Bard, K. A., & Cañamero, L. (2012). Emotional Body Language Displayed by Artificial Agents. ACM Transactions on Interactive Intelligent Systems, 2, 1-29. https://doi.org/10.1145/2133366.2133368
[5] Becker-Asano, C., & Ishiguro, H. (2011). Evaluating Facial Displays of Emotion for the Android Robot Geminoid F. In 2011 IEEE Workshop on Affective Computational Intelligence (WACI) (pp. 1-8). IEEE. https://doi.org/10.1109/WACI.2011.5953147
[6] Beer, J. M., Fisk, A. D., & Rogers, W. A. (2009). Emotion Recognition of Virtual Agents Facial Expressions: The Effects of Age and Emotion Intensity. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 53, 131-135. https://doi.org/10.1177/154193120905300205
[7] Beer, J. M., Fisk, A. D., & Rogers, W. A. (2010). Recognizing Emotion in Virtual Agent, Synthetic Human, and Human Facial Expressions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54, 2388-2392. https://doi.org/10.1177/154193121005402806
[8] Beer, J. M., Smarr, C. A., Fisk, A. D., & Rogers, W. A. (2015). Younger and Older Users’ Recognition of Virtual Agent Facial Expressions. International Journal of Human-Computer Studies, 75, 1-20. https://doi.org/10.1016/j.ijhcs.2014.11.005
[9] Broadbent, E., Kumar, V., Li, X., Sollers III, J., Stafford, R. Q., & Wegner, D. M. (2013). Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived to Have More Mind and a Better Personality. PLOS ONE, 8, e72589. https://doi.org/10.1371/journal.pone.0072589
[10] Ceha, J., Chhibber, N., Goh, J., McDonald, C., Oudeyer, P., Kulić, D., & Law, E. (2019). Expression of Curiosity in Social Robots: Design, Perception, and Effects on Behaviour. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-12). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300636
[11] Costa, S., Brunete, A., Bae, B. C., & Mavridis, N. (2018). Emotional Storytelling Using Virtual and Robotic Agents. International Journal of Humanoid Robotics, 15, Article ID: 1850006. https://doi.org/10.1142/S0219843618500068
[12] Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., Gur, R. C., & Mathiak, K. (2008). Recognition Profile of Emotions in Natural and Virtual Faces. PLOS ONE, 3, e3628. https://doi.org/10.1371/journal.pone.0003628
[13] Fabri, M., Moore, D. J., & Hobbs, D. (2002). Expressive Agents: Non-Verbal Communication in Collaborative Virtual Environments. https://www.researchgate.net/publication/238689318
[14] Hamacher, A., Bianchi-Berthouze, N., Pipe, A. G., & Eder, K. (2016). Believing in BERT: Using Expressive Communication to Enhance Trust and Counteract Operational Error in Physical Human-Robot Interaction. In IEEE International Symposium on Robot and Human Interactive Communication (pp. 493-500). IEEE. https://doi.org/10.1109/ROMAN.2016.7745163
[15] Hofree, G., Ruvolo, P., Bartlett, M. S., & Winkielman, P. (2014). Bridging the Mechanical and the Human Mind: Spontaneous Mimicry of a Physically Present Android. PLOS ONE, 9, e99934. https://doi.org/10.1371/journal.pone.0099934
[16] Hortensius, R., Hekele, F., & Cross, E. S. (2018). The Perception of Emotion in Artificial Agents. IEEE Transactions on Cognitive and Developmental Systems, 10, 852-864. https://doi.org/10.1109/TCDS.2018.2826921
[17] Hosseinpanah, A., Krämer, N. C., & Straßmann, C. (2018). Empathy for Everyone? The Effect of Age When Evaluating a Virtual Agent. In Proceedings of the 6th International Conference on Human-Agent Interaction (HAI ’18) (pp. 184-190). Association for Computing Machinery. https://doi.org/10.1145/3284432.3284442
[18] Ishi, C. T., Minato, T., & Ishiguro, H. (2019). Analysis and Generation of Laughter Motions, and Evaluation in an Android Robot. APSIPA Transactions on Signal and Information Processing, 8, e6. https://doi.org/10.1017/ATSIP.2018.32
[19] Jack, R. E., & Schyns, P. G. (2017). Toward a Social Psychophysics of Face Communication. Annual Review of Psychology, 68, 269-297. https://doi.org/10.1146/annurev-psych-010416-044242
[20] Joyal, C. C., Jacob, L., Cigna, M. H., Guay, J. P., & Renaud, P. (2014). Virtual Faces Expressing Emotions: An Initial Concomitant and Construct Validity Study. Frontiers in Human Neuroscience, 8, Article 787. https://doi.org/10.3389/fnhum.2014.00787
[21] Krämer, N. C., Simons, N., & Kopp, S. (2007). The Effects of an Embodied Agent’s Nonverbal Behavior on User’s Evaluation and Behavioural Mimicry. In C. Pelachaud, J. C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent Virtual Agents (pp. 238-251). Springer. https://doi.org/10.1007/978-3-540-74997-4_22
[22] Krämer, N., Kopp, S., Becker-Asano, C., & Sommer, N. (2013). Smile and the World Will Smile with You—The Effects of a Virtual Agent’s Smile on Users’ Evaluation and Behavior. International Journal of Human-Computer Studies, 71, 335-349. https://doi.org/10.1016/j.ijhcs.2012.09.006
[23] Krumhuber, E., Manstead, A., Cosker, D., Marshall, D., & Rosin, P. L. (2008). Effects of Dynamic Attributes of Smiles in Human and Synthetic Faces: A Simulated Job Interview Setting. Journal of Nonverbal Behavior, 33, 1-15. https://doi.org/10.1007/s10919-008-0056-8
[24] Lazzeri, N., Mazzei, D., Greco, A., Rotesi, A., Lanatà, A., & De Rossi, D. E. (2015). Can a Humanoid Face Be Expressive? A Psychophysiological Investigation. Frontiers in Bioengineering and Biotechnology, 3, Article 64. https://doi.org/10.3389/fbioe.2015.00064
[25] Li, J. (2015). The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots and Virtual Agents. International Journal of Human-Computer Studies, 77, 23-37. https://doi.org/10.1016/j.ijhcs.2015.01.001
[26] Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., & Joublin, F. (2013). To Err Is Human(-Like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability. International Journal of Social Robotics, 5, 313-323. https://doi.org/10.1007/s12369-013-0196-9
[27] Mattheij, R., Postma-Nilsenová, M., & Postma, E. (2015). Mirror Mirror on the Wall. Journal of Ambient Intelligence and Smart Environments, 7, 121-132. https://doi.org/10.3233/AIS-150311
[28] Milcent, A., Geslin, E., Kadri, A., & Richir, S. (2019). Expressive Virtual Human: Impact of Expressive Wrinkles and Pupillary Size on Emotion Recognition. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 215-217). Association for Computing Machinery. https://doi.org/10.1145/3308532.3329446
[29] Miwa, H., Itoh, K., Ito, D., Takanobu, H., & Takanishi, A. (2004). Design and Control of 9-DOFs Emotion Expression Humanoid Arm. In IEEE International Conference on Robotics and Automation (pp. 128-133). IEEE. https://doi.org/10.1109/ROBOT.2004.1307140
[30] Mollahosseini, A., Abdollahi, H., Sweeny, T. D., Cole, R., & Mahoor, M. H. (2018). Role of Embodiment and Presence in Human Perception of Robots’ Facial Cues. International Journal of Human-Computer Studies, 116, 25-39. https://doi.org/10.1016/j.ijhcs.2018.04.005
[31] Nadel, J., Simon, M., Canet, P., Soussignan, R., Blancard, P., Cañamero, L., & Gaussier, P. (2006). Human Responses to an Expressive Robot. Proceedings of the Sixth International Workshop on Epigenetic Robotics, 128, 79-86.
[32] Novikova, J., & Watts, L. (2014). A Design Model of Emotional Body Expressions in Non-Humanoid Robots. In 2nd International Conference on Human-Agent Interaction (pp. 353-360). Association for Computing Machinery. https://doi.org/10.1145/2658861.2658892
[33] Numata, T., Asa, Y., Kitagaki, T., Hashimoto, T., & Karasawa, K. (2019). Young and Elderly Users’ Emotion Recognition of Dynamically Formed Expressions Made by a Non-Human Virtual Agent. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 253-255). Association for Computing Machinery. https://doi.org/10.1145/3349537.3352783
[34] Numata, T., Sato, H., Asa, Y., Koike, T., Miyata, K., Nakagawa, E., & Sadato, N. (2020). Achieving Affective Human-Virtual Agent Communication by Enabling Virtual Agents to Imitate Positive Expressions. Scientific Reports, 10, Article No. 5977. https://doi.org/10.1038/s41598-020-62870-7
[35] Ochs, M., Niewiadomski, R., & Pelachaud, C. (2010). How a Virtual Agent Should Smile? Morphological and Dynamic Characteristics of Virtual Agent’s Smiles. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents (pp. 427-440). Springer. https://doi.org/10.1007/978-3-642-15892-6_47
[36] Pais, A. L., Argall, B. D., & Billard, A. G. (2013). Assessing Interaction Dynamics in the Context of Robot Programming by Demonstration. International Journal of Social Robotics, 5, 477-490. https://doi.org/10.1007/s12369-013-0204-0
[37] Pelachaud, C. (2009). Modelling Multimodal Expression of Emotion in a Virtual Agent. Philosophical Transactions of the Royal Society B: Biological Sciences, 364, 3539-3548. https://doi.org/10.1098/rstb.2009.0186
[38] Perugia, G., Paetzel-Prüssman, M., Hupont, I., Varni, G., Chetouani, M., Peters, C. E., & Castellano, G. (2021). Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry towards Artificial Agents. Frontiers in Robotics and AI, 8, Article 699090. https://doi.org/10.3389/frobt.2021.699090
[39] Philip, L., Martin, J. C., & Clavel, C. (2018). Rapid Facial Reactions in Response to Facial Expressions of Emotion Displayed by Real versus Virtual Faces. i-Perception, 9, 1-18. https://doi.org/10.1177/2041669518786527
[40] Raffard, S., Bortolon, C., Khoramshahi, M., Salesse, R. N., Burca, M., Marin, L., & Capdevielle, D. (2016). Humanoid Robots versus Humans: How Is Emotional Valence of Facial Expressions Recognized by Individuals with Schizophrenia? An Exploratory Study. Schizophrenia Research, 176, 506-513. https://doi.org/10.1016/j.schres.2016.06.001
[41] Randhavane, T., Bera, A., Kapsaskis, K., Sheth, R., Gray, K., & Manocha, D. (2019). EVA: Generating Emotional Behavior of Virtual Agents Using Expressive Features of Gait and Gaze. In ACM Symposium on Applied Perception 2019 (pp. 1-10). Association for Computing Machinery. https://doi.org/10.1145/3343036.3343129
[42] Rehm, M., & André, E. (2005). Catch Me If You Can: Exploring Lying Agents in Social Settings. In Proceedings of International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 937-944). Association for Computing Machinery. https://doi.org/10.1145/1082473.1082615
[43] Rizzo, A. A., Neumann, U., Enciso, R., Fidaleo, D., & Noh, J. Y. (2001). Performance-Driven Facial Animation: Basic Research on Human Judgments of Emotional State in Facial Avatars. Cyberpsychology & Behavior, 4, 471-487. https://doi.org/10.1089/109493101750527033
[44] Ruijten, P. A. M., Midden, C. J. H., & Ham, J. (2013). I Didn’t Know That Virtual Agent Was Angry at Me: Investigating Effects of Gaze Direction on Emotion Recognition and Evaluation. In S. Berkovsky & J. Freyne (Eds.), Persuasive Technology (pp. 192-197). Springer. https://doi.org/10.1007/978-3-642-37157-8_23
[45] Shayganfar, M., Rich, C., & Sidner, C. L. (2012). A Design Methodology for Expressing Emotion on Robot Faces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4577-4583). IEEE. https://doi.org/10.1109/IROS.2012.6385901
[46] Spencer-Smith, J., Wild, H., Innes-Ker, Å. H., Townsend, J., Duffy, C., Edwards, C., Paik, J. W. et al. (2001). Making Faces: Creating Three-Dimensional Parameterized Models of Facial Expression. Behavior Research Methods, Instruments, & Computers, 33, 115-123. https://doi.org/10.3758/BF03195356
[47] Tsiourti, C., Weiss, A., Wac, K., & Vincze, M. (2019). Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes towards Robots. International Journal of Social Robotics, 11, 555-573. https://doi.org/10.1007/s12369-019-00524-z
[48] Van de Perre, G., Cao, H. L., De Beir, A., Esteban, P. G., Lefeber, D., & Vanderborght, B. (2018). Generic Method for Generating Blended Gestures and Affective Functional Behaviors for Social Robots. Autonomous Robots, 42, 569-580. https://doi.org/10.1007/s10514-017-9650-0
[49] Wolfert, P., Robinson, N., & Belpaeme, T. (2022). A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents. IEEE Transactions on Human-Machine Systems, 52, 379-389.
[50] Xu, J., Broekens, J., Hindriks, K. V., & Neerincx, M. (2013). The Relative Importance and Interrelations between Behavior Parameters for Robots’ Mood Expression. In Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 558-563). IEEE. https://doi.org/10.1109/ACII.2013.98
[51] Youssef, A. B., Chollet, M., Jones, H., Sabouret, N., Pelachaud, C., & Ochs, M. (2015). Towards a Socially Adaptive Virtual Agent. In W. P. Brinkman, J. Broekens, & D. Heylen (Eds.), Intelligent Virtual Agents (pp. 3-16). Springer. https://doi.org/10.1007/978-3-319-21996-7_1