The practice of modeling social emotions has benefited from interdisciplinary engagements with other fields in the hard and human sciences; however, perspectives from cultural and social anthropology have been limited. This has at times resulted in the integration of emotion theories into emotion modeling that emphasize the universal communicability of social signals of emotion at the expense of accounting for cultural diversity evidenced in the ethnographic record. This paper outlines methods and findings of a collaborative effort between cultural anthropologists and engineers to create platforms for interdisciplinary communication and emotion modeling practices more sensitive to cultural diversity and better protected from risks of ethnic, racial, and ethnocentric bias. The paper presents five principles for applying anthropological perspectives to emotion modeling and ultimately argues for a consideration of design strategies for social signal processing based on recent ethnographic evidence of evolving human-robot relationships in Japan.
Keywords—affect, affective robotics, cultural anthropology, emotion, multispecies society, social emotion, social signal processing, thick description
Human-Robot Interaction requires coordination strategies that allow human and artificial agencies to interpret and interleave their actions. In this paper we consider the potential of artificial emotions to serve as coordination devices in human-robot teams. We propose an approach for modelling action selection based on artificial emotions and signalling a robot’s internal state to human team members. We describe an architecture that drives the display of artificial emotional gestures with a model of latched internal emotional states. We also present preliminary data on human recognition rates for a candidate set of artificial emotional expressions in a Lego robot.
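The latching of internal emotional states described above can be sketched as a minimal state machine. The state names and hold duration below are illustrative assumptions; the abstract does not specify them:

```python
import time

class LatchedEmotionModel:
    """Minimal sketch of a latched internal emotional state.

    Once an emotion is triggered, it is held ("latched") for a minimum
    duration so the robot's expressive gesture can complete before a new
    internal event overwrites it.
    """

    def __init__(self, hold_seconds=2.0, clock=time.monotonic):
        self.hold_seconds = hold_seconds
        self.clock = clock
        self.state = "neutral"
        self._latched_until = 0.0

    def trigger(self, emotion):
        """Request a state change; ignored while the current state is latched."""
        now = self.clock()
        if now >= self._latched_until:
            self.state = emotion
            self._latched_until = now + self.hold_seconds
            return True
        return False

# Example with a fake clock for determinism.
t = [0.0]
model = LatchedEmotionModel(hold_seconds=2.0, clock=lambda: t[0])
model.trigger("surprise")        # accepted at t = 0
t[0] = 1.0
accepted = model.trigger("joy")  # rejected: "surprise" is still latched
t[0] = 2.5
model.trigger("joy")             # accepted: the latch has expired
```

The latch decouples fast-changing internal signals from the slower timescale of expressive gestures, which is one plausible reading of the architecture's purpose.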
It is hypothesised here that there exist two classes of emotions: driving and satisfying emotions. Driving emotions significantly increase the internal activity of the brain and cause the agent to seek to minimise its emotional state by performing actions that it would not otherwise do. Satisfying emotions decrease internal activity and encourage the agent to continue its current behaviour to maintain its emotional state. It is theorised that neuromodulators act as simple yet high-impact signals that either agitate or calm specific neural networks, resulting in what we can define as either driving or satisfying emotions. The plausibility of this hypothesis is tested in this paper using feed-forward networks of leaky integrate-and-fire neurons.
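The agitating versus calming effect of a neuromodulator can be illustrated with a single leaky integrate-and-fire neuron whose input is scaled by a modulatory gain. The parameter values here are illustrative, not those used in the paper:

```python
def simulate_lif(input_current, gain, steps=200, dt=1.0,
                 tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Count spikes of a leaky integrate-and-fire neuron.

    `gain` plays the role of a neuromodulator: gain > 1 agitates the
    neuron (a "driving" influence), gain < 1 calms it (a "satisfying"
    influence).
    """
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Euler step of the LIF membrane equation: tau * dv/dt = -v + I
        v += (-v + gain * input_current) * dt / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

baseline = simulate_lif(1.2, gain=1.0)    # un-modulated activity
driving = simulate_lif(1.2, gain=2.0)     # agitated: higher firing rate
satisfying = simulate_lif(1.2, gain=0.5)  # calmed: sub-threshold, no spikes
```

With the calming gain, the effective input falls below threshold and activity ceases; with the agitating gain, the firing rate rises, which is the qualitative distinction the hypothesis rests on.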
Robotic emotional expressions could benefit social communication between humans and robots, if the cues such expressions contain were to be intelligible to human observers. In this paper, we present a design framework for modelling emotionally expressive robotic movements. The framework combines approach-avoidance with Shape and Effort dimensions, derived from Laban, and makes use of anatomical body planes that are general to both humanoid and non-humanoid body forms. An experimental validation study is reported with 34 participants rating an implementation of five expressive behaviours on a non-humanoid robotic platform. The results demonstrate that such expressions can encode basic emotional information, in that the parameters of the proposed design model can convey the meaning of the emotional dimensions of valence, arousal and dominance. The framework thus creates a basis for implementing a set of emotional expressions that are appropriately adapted to contexts of human-robot joint activity.
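A mapping from emotion dimensions to movement parameters of this general kind can be sketched as follows. The particular assignments (arousal to speed, valence to openness along the frontal plane, dominance to vertical extension) and the parameter names are illustrative assumptions loosely inspired by Laban Shape/Effort descriptors, not the paper's actual model:

```python
def expressive_motion_params(valence, arousal, dominance):
    """Toy mapping of valence/arousal/dominance (each in [-1, 1]) to
    normalised movement parameters in [0, 1]."""
    for x in (valence, arousal, dominance):
        if not -1.0 <= x <= 1.0:
            raise ValueError("emotion dimensions must lie in [-1, 1]")
    return {
        # High arousal -> quick, sudden movement; low arousal -> sustained.
        "speed": 0.5 + 0.5 * arousal,
        # Positive valence -> opening/approach; negative -> closing/avoidance.
        "openness": 0.5 + 0.5 * valence,
        # High dominance -> rising posture; low dominance -> sinking.
        "vertical_extension": 0.5 + 0.5 * dominance,
    }

joy = expressive_motion_params(valence=0.8, arousal=0.6, dominance=0.4)
fear = expressive_motion_params(valence=-0.7, arousal=0.8, dominance=-0.6)
```

Because the parameters refer to body planes rather than limbs, the same mapping could in principle drive both humanoid and non-humanoid platforms, which is the generality the framework aims for.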
Taking neuromodulation as a mechanism underlying emotions, this paper investigates how such a mechanism can bias an artificial neural network towards exploration of new courses of action, as seems to be the case in positive emotions, or towards exploitation of known possibilities, as in negative emotions such as predatory fear. We use neural networks of spiking leaky integrate-and-fire neurons acting as minimal disturbance systems, and test them with continuous actions. The networks have to balance the activations of all their output neurons concurrently. We have found that having the middle layer modulate the output layer helps balance the activations of the output neurons. A second finding is that when the network is modulated in this way, it performs better at tasks requiring the exploitation of actions that are found to be rewarding. This is complementary to previous findings where having the input layer modulate the middle layer biases the network towards exploration of alternative actions. We conclude that a network can be biased towards either exploration or exploitation depending on which layers are being modulated.
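The exploration/exploitation distinction can be illustrated with a rate-based analogy rather than a spiking network: a multiplicative modulatory gain applied near the output sharpens the action distribution (exploitation), while modulatory noise injected at an earlier stage perturbs and on average flattens it (exploration). This gain/noise formulation is an illustrative simplification, not the paper's model:

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def action_distribution(logits, output_gain=1.0, hidden_noise=0.0, rng=None):
    """Action probabilities under two forms of 'modulation':
    a gain on the output stage and Gaussian noise at an earlier stage."""
    rng = rng or random.Random(0)
    noisy = [x + rng.gauss(0.0, hidden_noise) for x in logits]
    return softmax([output_gain * x for x in noisy])

logits = [1.0, 0.5, 0.2]
baseline = action_distribution(logits)
exploit = action_distribution(logits, output_gain=4.0)   # sharper: greedy
explore = action_distribution(logits, hidden_noise=2.0)  # perturbed choices
```

The point of the sketch is only the direction of the effect: where the modulation is applied determines whether the same network becomes more greedy or more variable in its action selection.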
A key aspect of the sociability of robots is their ability to collaborate with humans in the same environment. There are many challenges in achieving a successful collaboration between robots and humans. Existing computational models of collaboration explain some of the important concepts underlying collaboration, such as the presence of a reason for the collaborators' commitment, and the necessity of communicating about mental state in order to maintain progress over the course of a collaboration. However, while the most prominent collaboration theories explain the important elements of a collaboration's structure, the underlying processes required to dynamically create, use, and maintain the elements of this structure are largely unexplained. Our insight is that in many collaborative situations, acknowledging or ignoring a collaborator's emotions can maintain or impede the progress of the collaboration. For many such situations, prominent collaboration theories do not explain what processes are required or how they can maintain the collaboration structure and subsequently the collaboration's progress. This implies that collaborative agents need to employ a series of processes that (1) can use the information in the collaboration structure to evaluate different aspects of the collaboration status, and (2) can influence the collaboration structure when required.
This thesis work develops a new affect-driven computational framework to achieve these objectives, empowering collaborative agents to be better collaborators. This thesis makes the following contributions: (1) This work builds on the combination of the SharedPlans theory of collaboration and the cognitive appraisal theory of emotions; as such, we contribute by introducing Affective Motivational Collaboration theory, which incorporates key notions of both SharedPlans and cognitive appraisal theories in a dyadic collaboration context. Applying cognitive appraisal theory in the collaboration context is novel; other appraisal-based models have not attended to the dynamics of collaboration.
(2) The development of new computational models and algorithms for the Affective Motivational Collaboration framework. Our work contributes computational models and algorithms that compute the values of appraisal variables based on the collaboration structure in a dyadic collaboration. Reciprocally, we use the evaluative nature of appraisal to make changes to the collaboration structure as required. We have also developed a new algorithm for emotion-driven goal management in the context of collaboration. This part of our work shows how appraisal components of the self and of the human collaborator contribute to goal management as an emotion function.
(3) Implementation of a computational system based on Affective Motivational Collaboration theory. In order to evaluate our computational models and algorithms in interaction with human collaborators, we needed an overall functional system to perceive, process, and act in a collaborative environment. We have implemented a computational system that employs our models and algorithms. The emphasis of the implementation is on the underlying cognitive processes of collaboration and appraisal; however, the implementation also includes the Perception and Action mechanisms.
(4) Validation of Affective Motivational Collaboration theory. We have conducted two user studies: (a) to validate our appraisal algorithms before further development of our framework, and (b) to investigate the overall functionality of our framework within an end-to-end system evaluation with participants and a robot.
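The idea of computing appraisal variables from the collaboration structure can be sketched in miniature. The event fields, the four appraisal variables, and the rules relating them are an illustrative toy, not the thesis's actual algorithms (the variable names themselves are standard in cognitive appraisal theory):

```python
from dataclasses import dataclass

@dataclass
class CollaborationEvent:
    """A hypothetical slice of the collaboration structure: an event
    (e.g., the human completing or abandoning a shared subtask) seen
    against the agent's active goal."""
    advances_goal: bool     # does the event move the shared plan forward?
    was_expected: bool      # was it predicted by the agent's plan?
    agent_can_repair: bool  # can the agent act to restore progress?

def appraise(event):
    """Toy appraisal of an event with respect to the shared goal."""
    return {
        "relevance": 1.0,  # every modelled event touches the shared goal
        "desirability": 1.0 if event.advances_goal else -1.0,
        "expectedness": 1.0 if event.was_expected else 0.0,
        "controllability": 1.0 if event.agent_can_repair else 0.0,
    }

# An unexpected, undesirable but repairable event might correspond to
# frustration and trigger goal management, e.g. proposing an alternative
# subtask to the human collaborator.
appraisal = appraise(CollaborationEvent(False, False, True))
```

The reciprocal direction described in contribution (2), in which appraisal outcomes feed back to modify the collaboration structure, would sit downstream of a function like `appraise`.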
This thesis is concerned with Interactive Digital Storytelling (IDS) from two perspectives. Firstly, it analyses the problem of developing IDS systems and defines behavioral requirements for the agents in those systems. This first part concludes with a proposal of a minimalistic affect-modulated architecture that can be used to develop agents for medium-sized IDS systems. Secondly, two working IDS systems built on top of the architecture are introduced and used in the following part of the thesis, which investigates the problem of automatically analysing the story spaces generated by IDS systems. A general methodology of analysis is introduced, implemented, and tested on the domains of three working IDS systems.
We propose a mapping of neuromodulators onto computational processes that could be used as a framework for affective computation, based on Lovheim’s “Cube of emotions” [17], Plutchik’s “Wheel of emotions” [25], and Tomkins’s “Theory of affects” [13], and we combine these theories in the cognitive architecture of “The emotion machine” by Marvin Minsky [21]. In other words, we propose a mechanism of affects (emotions) as neuromodulatory influence on computational processes.
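A mapping of this kind can be sketched as neuromodulator levels acting as scalar controls over computational parameters. The specific assignments below (dopamine to learning rate, serotonin to discounting/patience, noradrenaline to decision sharpness) are common interpretations in computational neuroscience, used here only as an illustrative assumption; they are not the exact mapping proposed in the paper:

```python
def modulated_process_params(dopamine, serotonin, noradrenaline):
    """Toy mapping of neuromodulator levels (each in [0, 1]) onto
    parameters of computational processes."""
    for x in (dopamine, serotonin, noradrenaline):
        if not 0.0 <= x <= 1.0:
            raise ValueError("neuromodulator levels must lie in [0, 1]")
    return {
        # More dopamine -> stronger reward-driven learning.
        "learning_rate": 0.01 + 0.09 * dopamine,
        # More serotonin -> more patience (heavier weighting of the future).
        "discount_factor": 0.5 + 0.49 * serotonin,
        # More noradrenaline -> sharper, more decisive action selection.
        "softmax_inverse_temp": 1.0 + 9.0 * noradrenaline,
    }

calm = modulated_process_params(dopamine=0.2, serotonin=0.9, noradrenaline=0.1)
alert = modulated_process_params(dopamine=0.8, serotonin=0.3, noradrenaline=0.9)
```

Under such a scheme, a point in the three-dimensional neuromodulator space (the corners of which Lovheim associates with basic affects) corresponds to a configuration of the computational machinery, which is the core of the proposed mechanism.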
How can we make machines actually feel emotions? Is there any way to make an AI suffer, or feel happiness, love, aggression, contempt, or awe? Before we can properly answer this question, we should take into account several aspects: the psychological, as an overall picture of low-level and high-level subjective emotions; the neurophysiological, as the low-level objective mechanism of brain response; and a cognitive architecture in which to place these approaches. We propose a framework for implementing emotional thinking in computational spiking neurons, based on Lovheim’s “Cube of emotions”, Plutchik’s “Wheel of emotions”, and Tomkins’s “Theory of affects”, and we combine these theories in the cognitive architecture environment of “The emotion machine” by Marvin Minsky. We propose mechanisms by which emotions influence computational processes via neuromodulation.