Because of its multiple levels and how it integrates stable and dynamic components of personality, McAdams and Pals’ framework [
53] appeared especially useful for our aim of systematizing the existing robot personality research and for suggesting directions for future research beyond trait-based approaches.
We developed the definitions and details of the four levels of the Integrative Framework (IF) of Robot Personality with the particularities of robot design in mind and with regard to existing research in HRI and SR. The levels we propose should be considered on their own terms, not as an attempt at direct translation from McAdams and Pals. Based on a crucial insight into differences between human and robot personality, McAdams and Pals’ Principle 1 has become
Level 1 of the IF; and
Principle 5, the differentiating role of culture, has an analog in Section
3.5, but its role is different in the IF than in Reference [
53], as a result of the same insight that led us to define the new
Level 1 of our framework. This insight comes from a core assumption that underpins our work: Unlike humans and other biological agents, robots cannot be considered a “natural kind”; their designs are not a product of a natural evolution, nor do they follow some kind of deterministic process.
Since robots do not evolve, the first level of the IF is a replacement for, more than an adaptation of, McAdams and Pals’
Principle 1. Where the principle of evolution and human nature creates a background for human personality, our
Level 1 (Section
3.1) is on a more equal footing with the other levels: regardless of the level, robots and their personalities are necessarily socially and culturally constructed and situated; we cannot speak about the (material) “essence” of robots in quite the same way as we can for humans. With the IF, we make an effort not to assume that concepts of human personality will have a direct referent in robots. In other words, when developing our framework, we remained keenly aware that what makes up robot personality and how it is interpreted remain open, situation-dependent, and culturally specific.
3.1 Level 1: Fundamental Underpinnings of Robot Personality
We propose that the first level of the Integrative Framework of Robot Personality Research concerns the fundamental components of robot materiality.
From an engineering and design perspective, these fundamental components create affordances for different personality capacities; these are the robot’s features, like its hardware and its morphological design, which play a role in each of the other personality levels. In addition to the robot-centered side, users contribute another fundamental building block of robot personality, namely, certain fundamental socio-cognitive processes of the human mind, such as sociomorphing and the tendency to anthropomorphize [
84]. These components are essential in shaping the experience of a robot as, to some degree, social.
We propose that one of these fundamental components of robot materiality concerns
the technological embodiment of the robot.
This includes the hardware that a robot runs on and the kinds of sensors and actuators it has. The technological embodiment—though not immediately related to the construction of a robot’s personality as instantiated at other levels of the IF—is nevertheless crucial, because it defines and constrains a robot’s capacities, the way it interacts with the world, the fluidity and intuitiveness (for the user) of these interactions, and, consequently, which capacities for personality, as embodied in design features, it can support [
80]. For instance, a robot operating on an Arduino will differ consequentially from a robot running on high-performance processors. Similarly, a robot that can detect and respond to a user’s gaze may afford a richer and more intuitive interaction than a robot that only has more limited opportunities for interaction (e.g., only a touchscreen) [
97].
In addition to technological embodiment,
physical and morphological embodiment is another fundamental component of robot materiality that shapes robot personality, as commonly understood in HRI and social robotics. As Robert Jr. et al. demonstrate convincingly in their systematic review of studies evaluating aspects of robot embodiment, embodiment is an important vehicle for communication, acceptance, and engagement [
14]. The key premises about robot embodiment, as supported by empirical evidence [
29,
98], are thus: (i) a robot having a (physical) body is significant for HRI, and (ii) robot embodiment sets expectations about both robot functionality and sociality. Simply put: Both the fact that a robot has a body and the shape this body takes will elicit different responses in humans. Hwang et al. provide further evidence from a study that used 27 different shapes of robots (including visual representations and physical prototypes) to explore whether any of them aroused affective responses in humans [
32]. Not only did the study conclude that certain robot bodies elicited particular emotions, but also that these emotions and the Big Five personality traits were perceived more strongly through physically present prototypes than through images of the same robots. More recently, Dennler et al. developed an open-source database of 165 robot embodiments and assessed, via Mechanical Turk, the initial expectations these embodiments elicit, asking participants to describe the robots using metaphors, to evaluate their gender expression, and to assign them tasks to probe functional expectations [
15].
On the other side of the coin are the socio-cognitive processes elicited by the affordances of a robot’s technological and morphological embodiments, especially those processes that are more cross-situationally and cross-culturally stable than those probed in the studies mentioned above. Whereas assigning a task to a robot can be seen as quite a high-level, deliberative socio-cognitive process, the kinds of processes we refer to are basic, automatic mechanisms of human sociality. In HRI and SR, research into
anthropomorphism and
sociomorphing has produced a variety of key insights into these mechanisms. Commonly, anthropomorphism in HRI and SR refers to the tendency to attribute human characteristics to non-human agents, including robots [
11,
60]. The core socio-cognitive mechanism behind anthropomorphism is related to how humans use existing knowledge representations to make inferences about non-human agents [
60]. This process is automatic but cognitively penetrable, meaning that we may intentionally revise the assignment of human attributes to non-human agents or objects upon deliberating on the situation [
90]. As a dispositional trait in humans, anthropomorphism is expressed to different degrees by different people and in different situations. Building on theoretical and empirical work in cognitive psychology and SR, Nicolas and Wykowska have provided evidence that the need for cognition (an individual’s willingness to engage in reflective processes) and the need for closure (the need to understand and predict non-human behavior) are among the factors that contribute to a person’s tendency to anthropomorphize [
60].
In addition to the challenge of empirically differentiating and assessing the individual mechanisms that contribute to a person’s tendency to anthropomorphize, other ongoing work in HRI and SR suggests that anthropomorphism may not exhaust the various cognate tendencies that people engage when interacting with non-human agents such as robots. The notion of
sociomorphing complements and extends the ongoing work in HRI on anthropomorphism; sociomorphing refers to the process by which humans engage a mental model of their interaction partner, where this model can stem from the experience of human-human interactions (the mechanism underlying anthropomorphism), but it need not [
74]. The motivation behind this is the recognition that a finer conceptual and methodological differentiation is needed to account for various instances of humans attributing social capacities to (social) robots [
10,
11,
74]. Rather than being an “all-or-nothing affair,” anthropomorphism is one form that sociomorphing can take alongside other types of
experienced sociality [
11]. The Descriptive
Ontology of Asymmetric Interactions (OASIS) framework by References [
10,
11] makes this theoretical premise available for empirical evaluation and integrates conceptual resources to anchor forms of sociomorphing and experienced sociality.
This human/user side of the fundamental underpinnings of robot personality is crucial to keep in mind when designing robots; it is what leads us to argue that robot personality emerges at the intersection, in the interaction of robots and humans. A growing body of research in HRI and SR testifies that robotic designs constitute a “minimal form of context, modulating the effect [of the robot] at the dispositional level” [p.10][
60]. For example, the closer the morphology of a robot to humans, the more likely it is to activate the human action-perception system that leads people to project human-like characteristics, including personality traits, to it [
102]. Similarly, a robot being able to detect human gaze plays a role in shaping intuitive interactions [
97]. Lorenz et al. also emphasize movement synchrony and reciprocity in HRI as a common ground that also supports higher-level mechanisms of interaction [
48].
Suggested Directions of Research.
We agree with Reference [
60] that understanding HRI better means we also have to understand how the basic mechanisms of human cognition and psychology “interact [with], impact or [are] impacted by robots” [p. 11]. We see a fruitful direction of research in further development of conceptual and methodological tools for studying (i) what it is about robots that taps into basic human socio-cognitive processes, (ii) which human socio-cognitive processes play what kind of role, in different situations or at different ages, for example, Reference [
58]. The crucial open questions for the IF concern how robot designs and human cognitive processes at this fundamental level shape perceptions of robot personality, i.e., whether they can be intentionally and specifically taken advantage of to build one or another kind of robot personality.
Considering the types of experienced sociality Damholdt et al. found manifest in situated interactions with robots of varying designs [
11], we propose extending research beyond robots designed intentionally to support and participate in social and affective interactions with humans (so-called social robots) to include, for example, functional service robots that people also experience as in some capacity social [
16].
New directions for research emerge as people have more time to observe and experience the behavioral capabilities of robotic systems. From existing long-term studies of social robots in everyday life settings, we know that initial fascination with (social) robots decreases over time, as the novelty effect fades [
37,
71,
91]. As pointed out by Robert Jr. et al., the length of interactions is an integral factor for how a robot is perceived [
14]. Outside of laboratory settings characterized by a limited scope and number of interactions, tracking the role of the fundamental components constitutive of robot materiality and of the experience of a robot as in some form a social agent remains a challenge that is nevertheless worth pursuing.
Another future direction could investigate the opportunities and issues of designing robots that are simple in the computational resources required (i.e., technological embodiment) while offering their human counterparts a particularly engaging platform for robot personality to emerge over long-term interactions. For example, the work on the handcrafted open-source robotic platform Blossom by Suguitan and Hoffman [
81] is a promising step in this direction. According to its creators, Blossom’s design elements (e.g., a quick assembly mechanism, a handcrafted appearance open to customization, and tensile mechanisms and elastic components ensuring organic movements) are conceived with the aim of a low barrier to entry, resulting in an accessible and customizable robot (ibid.).
3.2 Level 2: Traits
The second level of the Integrative Framework concerns variations in a small set of relatively static robot social and communicative behaviors (i.e., dialogue speed, pitch, proximity to people, etc.) and design features commonly captured under the trait theories in SR and HRI studies of robot personality.
As we discussed above, the core assumption behind robot personality studies informed by trait models is that personality can be mapped onto an externally recognizable set of behaviors and communication cues. For example, Ludewig et al. distinguished two robot personalities—extroverted and conventional—based only on certain verbal and non-verbal characteristics and investigated whether the extroverted robot was associated with higher social acceptability [
49]. They hypothesized that the extroverted robot personality would be associated with the higher social acceptability of the robot. Based on a field study with 194 participants, the authors concluded that the extroverted shopping robot received higher acceptance scores and was perceived to be more extroverted than the standard version.
Thus, the components of robot personality at this level offer more straightforward opportunities in terms of how to design robot personalities. In other words, if technological and morphological embodiments (Level 1) define the capacities of what a robot will be able to do, then the traits, as instantiated through (more superficial, compared to Level 1) appearance features, dialogue speed, gestures, proxemics, dialogue strategies, and so on, refer to what the robot does and how it does it. Today, designers working on a particular robot might have certain goal traits in mind, which teams of animators and dialogue authors work to achieve.
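To make this kind of trait-to-behavior mapping concrete, here is a minimal sketch. The trait name, parameter names, numeric ranges, and the linear interpolation scheme are our own illustrative assumptions, not drawn from any cited system:

```python
# Hypothetical sketch: mapping a target trait level to low-level behavior
# parameters. All names, ranges, and the interpolation are illustrative
# assumptions, not taken from any system cited in the text.

def lerp(lo, hi, t):
    """Linear interpolation between lo and hi for t in [0, 1]."""
    return lo + (hi - lo) * t

def extroversion_to_behavior(level):
    """Map an extroversion level in [0, 1] to behavior parameters.

    0.0 is loosely a 'conventional' robot and 1.0 an 'extroverted' one,
    echoing the two personalities in the Ludewig et al. study.
    """
    if not 0.0 <= level <= 1.0:
        raise ValueError("trait level must lie in [0, 1]")
    return {
        "speech_rate_wpm": lerp(110, 170, level),      # words per minute
        "pitch_variation": lerp(0.1, 0.6, level),      # relative F0 range
        "gesture_rate_hz": lerp(0.05, 0.5, level),     # gestures per second
        "approach_distance_m": lerp(1.2, 0.6, level),  # closer when extroverted
    }

params = extroversion_to_behavior(0.8)
```

A mapping like this makes the design decision explicit and inspectable, which is one reason trait models remain attractive despite the conceptual concerns discussed below.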
The challenge at this level is reliably mapping traits to certain behaviors, like speech styles. From a methodological and design perspective, this also raises the topic of user involvement in decision-making about robot design and social and communicative behaviors. Especially relevant are decisions that contribute to establishing a robot “persona” or character that differentiates it from other similar robots. An interesting example comes from the work of Cietto et al., who used a common hobby robotics kit and relied on participatory design methods to co-design an educational robot’s appearance and personality together with children aged 7–8 years [
9]. The open-source robotic platform Blossom, already mentioned above, also explores how to leverage user involvement in crafting the robot’s appearance and behaviors for more sustained interactions [
81].
Suggested Directions of Research.
Within the IF, we propose that SR and HRI researchers continue exploring trait models for robot and human personality, but with a renewed, acute sensitivity to the challenges embedded in this approach. For instance, the characteristics of what constitutes an “extrovert” robot in existing studies are limited; we argue that such characteristics are only one aspect of what makes participants rate an “extroverted” robot as more likable and joyful to use. These limitations are exacerbated by assuming that human personality traits are always adequate for describing robot personalities. This assumption alone may lead to inconsistent results if research participants must invent a mapping between their experience of robot personalities and pre-selected categories (e.g., the Big Five). In response to this concern, there are exceptions to the
a priori use of human personality traits [
5,
47,
103] that emphasize differences between robots and humans (and computers) and call for studying each in their own right.
One way to advance these efforts could involve deepening a budding participatory approach to researching robot personality traits. This approach would avoid naturalizing robot personality based on one particular understanding of human “nature” in favor of developing a way to think about robot personality that begins from the robots themselves. For example, Weiss et al. qualitatively studied the adjectives that participants used to talk about companion robots [
93]. That way, researchers remained open to understanding robot personality in terms of humans’ experience of robots, rather than through instruments designed to measure human personality. This approach offers a distinctive advantage in that it avoids reinscribing the limitations of personality psychology’s corresponding approaches and instruments onto robot personality research. Future research could delve into how the perceived human-likeness of a robot affects whether people find human-specific or robot-specific traits more appropriate in a specific interaction context. It is important to note that our proposal does not imply that one definitive set of robot-specific traits should, or even could, be developed. Instead, it encourages an exploration of possible robot-centered traits in principle, along with an examination of how robot-centered and context-centered factors determine their relevance. This approach offers a promising avenue for addressing the limitation of relying solely on human-based traits in interaction design.
In a similar vein, another salient question concerns the tension between evidence for the long-term stability of personality traits and the fact that this stability does not entail that personality traits remain unchanged across the life span [
53], [
6,
66]. It remains unclear how to address the stability or change in the context of sustained interactions with socially interactive robots. Future research from a human-centered perspective might pursue how the perception of robot personality traits changes over time. At the same time, it is worthwhile to continue exploring architectures suitable to enable dynamic robot personalities.
3.3 Level 3: Adaptations
In the third level of the Integrative Framework, we propose to understand adaptations as dynamic changes in a robot’s behaviors that result from learning about the environment, the user, and their preferences, and from responding through continuous adaptation of verbal and non-verbal behaviors and task performance. Motivations, goals, and desires rooted in a robot also contribute to adaptations by orchestrating the direction of adaptive behaviors. The aim of developing and evaluating personality constructs at Level 3 is to support the building and maintenance of meaningful, personalized, and
lasting human-robot interactions. This reflects Dautenhahn’s call for individualized robot companions that need to be “socialized and personalized” to meet the emotional, social, and cognitive needs of their owners [
12]. Specifically, Dautenhahn draws from a developmental perspective and the model of dog-human relationships to ground her proposal for the “bringing up” of robots.
Technically, many challenges remain to achieve such sophisticated learning and personalization: what needs to be learned, where to look for instances of target behaviors and how to recognize them, and what the ideal response behavior is may all be unknown. The literature in HRI is rich with different proposals for how to implement this kind of learning, ranging from direct mappings of human personality or task context to robot personality to machine learning approaches. Different models are being explored, including a variety of unsupervised and reinforcement learning approaches. Different research groups develop cognitive architectures, often to address different dimensions of personalized adaptation. Ideally, users themselves are also able to provide reinforcement to personalize robot behaviors, apparent motivations, and the personality the robot expresses. In one example, Uchida et al. integrate implicit and explicit user feedback into a single learning model [
89]. To keep abreast of these many developments, Kiderle et al. provide an overview of how different reinforcement learning approaches can be engaged for the task of supporting dynamic adaptations [
38]. They also discuss how neural networks can be used to realize expressive behaviors during interactions by using a data-driven method.
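As a minimal illustration of how reinforcement-style adaptation of this general kind can work, the following sketch uses an epsilon-greedy bandit over candidate behavior styles. The style names, reward scale, and update rule are illustrative assumptions, not the approach of any specific system discussed here:

```python
# Hypothetical sketch of reinforcement-style personalization: an
# epsilon-greedy bandit selects among candidate behavior styles and updates
# its value estimates from scalar user feedback. Names and scales are
# illustrative assumptions only.
import random

class BehaviorAdapter:
    def __init__(self, styles, epsilon=0.1):
        self.styles = list(styles)
        self.epsilon = epsilon
        self.values = {s: 0.0 for s in self.styles}  # estimated reward per style
        self.counts = {s: 0 for s in self.styles}

    def choose(self):
        """Explore a random style with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(self.styles)
        return max(self.styles, key=lambda s: self.values[s])

    def update(self, style, reward):
        """Incremental mean update from user feedback in [-1, 1]."""
        self.counts[style] += 1
        n = self.counts[style]
        self.values[style] += (reward - self.values[style]) / n

adapter = BehaviorAdapter(["chatty", "reserved", "playful"])
style = adapter.choose()
adapter.update(style, reward=0.5)  # e.g., the user smiled or gave a thumbs-up
```

In practice the reward signal would come from the implicit and explicit feedback channels discussed above, and a contextual or deep RL formulation would replace this stateless bandit; the sketch only shows the core choose-observe-update loop.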
We mentioned above the goals, motivations, and desires programmed into a robot as one component that also contributes to robot personality development at Level 3 of the IF. Goals, motivations, and desires constrain and shape the “essence” of a robot’s personality and ensure a kind of trajectory for the behavioral changes of the robot. From the user perspective, this may be perceived as the robot behaving (more) consistently than a robot without such a motivational core, and ideally in a manner that reflects the user’s needs and preferences. Thus, a character emerges for the robot that differentiates it from other similar robots of the “same kind.” The core motivations that shape a robot’s personality are some of the structures that orient deep learning and other models of user desire toward making the robot respond to users in a meaningful way. While model choices and optimizations present their own challenges, the basic challenge, as pointed out above, remains (i) capturing relevant user behaviors and then (ii) having the robot respond in a relevant and consistent manner. Rather than constraining the computational form that learning about users should take, what motivations/intentions/goals offer above all is a representation for understanding and defining the robot’s basic attitude toward the user in their relationship. Concrete work on intentions and motivations in robots is being pursued by Hiroshi Ishiguro and colleagues; in Uchida et al., for example, the authors discuss the development of an autonomous dialogue robot intended to support “a symbiotic relationship with humans, where both have their own intentions and desires and infer each other’s ones through dialogue” [p.2][
89]. Inspired by findings from neuroscience and cognitive science recognizing that (some) human and animal desires are instinctual and need not be explicit, they propose a cognitive architecture where desires are embedded both at the conceptual level (representations) and in the android’s reflexive behaviors. The motivation behind this dual structure is to contribute to “rational selection of behavior,” much like in humans, and to enable the realization of the complex functions of the robot that would allow humans and robots to communicate more successfully and to learn about each other.
Other work has made efforts to design a robot’s root behaviors following one or more behavioral styles of personas [
51,
68]. In a study on inferring intentions from eye-gaze cues with the human-like robot Geminoid, Mutlu et al. conclude that users notice and are helped by coherent behavioral cues that suggest an underlying goal or motivation of the robot [
59]. These are both characteristic adaptations in the sense intended by McAdams and Pals. In another study, Tanevska et al. explored how the cognitive architecture of the humanoid robot iCub can support different user profiles and contribute to the flow of interactions by inscribing different values for the robot’s internal variables at the beginning [
83]. To test the adaptations, three user profiles were identified based on the frequency and modality of interactions—a highly interactive, sparsely interactive, and in-between profile. One important outcome was that even when the robot adaptation was slow-paced, it was possible to observe changes in robot behavior over time. Thus, the robot could demonstrate a capability to progressively adapt in interactions with its users.
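To illustrate how internal variables can drive slow-paced adaptation of this general kind, here is a hypothetical sketch loosely inspired by such architectures. The variable name, thresholds, and update rule are our own assumptions, not the published iCub architecture:

```python
# Hypothetical sketch of internal-variable-driven adaptation: a "comfort"
# variable is nudged by how often the user engages, and the robot's coarse
# interaction profile follows it. All names, thresholds, and the update rule
# are illustrative assumptions.

class AdaptiveCore:
    def __init__(self, comfort=0.5, rate=0.1):
        self.comfort = comfort  # internal variable in [0, 1]
        self.rate = rate        # small step size -> slow-paced adaptation

    def observe(self, user_engaged):
        """Nudge comfort toward 1 when the user engages, toward 0 otherwise."""
        target = 1.0 if user_engaged else 0.0
        self.comfort += self.rate * (target - self.comfort)

    def behavior_profile(self):
        """Map the internal state to a coarse interaction profile."""
        if self.comfort > 0.66:
            return "proactive"  # suits a highly interactive user
        if self.comfort < 0.33:
            return "reserved"   # suits a sparsely interactive user
        return "balanced"

core = AdaptiveCore()
for _ in range(10):  # ten engaged interactions in a row
    core.observe(user_engaged=True)
profile = core.behavior_profile()
```

Because the step size is small, the profile shifts only after repeated interactions, mirroring the observation that even slow-paced adaptation produces visible behavioral change over time.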
Suggested Directions of Research.
While there have been attempts to address how adaptability can be integrated at the level of agent architecture [
83], we know little about how user-adaptive systems will play out in long-term interactions. For instance, in the case of Jibo, a personal assistant robot that learns the user’s habits and preferences regarding the robot’s actions [
52], the robot’s (limited) adaptability did not appear to support long-term engagement [
31]. This was also the case with Anki Vector [
85], Karotz [
13], and Pleo [
21]. One notable example is the robotic seal Paro, which employs a reinforcement learning algorithm to gradually adapt to the user’s preferences. However, it is not clear to what degree the relative success of Paro in its domain of application is supported by the system’s adaptiveness, as opposed to other features such as the haptic feedback that Paro provides by virtue of having a particular kind of material embodiment, as we discussed in Section
3.1.
Given the failure of adaptiveness to sustain long-term interactions, we speculate that directing all efforts toward the pursuit of technical innovations in automation may not, in isolation, solve the fundamental challenge of long-term engagement [
31]. In contrast, with our
Integrative Framework for Robot Personality Research, we propose that the automated adaptation of robots should be complemented by a user-involved design that builds on humans’ needs and narratives. A complementary approach to this is to investigate how people may personalize their robots throughout interactions. This approach stems from the assumption that no design process is ever final. Due to the dynamic and complex nature of HRI, it is impossible to predict everything about the human beings that the robot is designed for. Thus, a promising direction of research is to explore designs that enable people to implement changes and adapt their robots according to their wishes and changing needs [
94].
3.4 Level 4: Narratives about the Robot
Level 4 of the Integrative Framework incorporates narratives about the robot. These include the narratives that people construct about their robots and that converge to a unique robot identity over time, as well as the narratives that designers, developers, and researchers generate as a kind of “back story” that explains the robot’s existence, role, and that may shape the relationship with the users. Within our Integrative Framework for Robot Personality Research, it is these narratives that shape how the robot is “unlike any other robot.”
At the level of narratives, we distinguish those that emerge at design time and during interaction. Designed narratives may reflect something like a back story for a particular robot, its role, character, and relationships with the humans around it. Such narratives can be delivered by the robot itself or by another human, such as a coworker or an experimenter. One poignant example of researchers engaging a narrative to shape HRI is the work by Jacq et al. [
33]. The study centered around the CoWriter activity, where the aim was to enable a young participant to teach handwriting to a robot in a complex and rich interaction. Two Nao robots were used, and a narrative script was created to convince the child that the robot truly needed help and benefited from the lessons. In this narrative, one of the Nao robots, called Mimi, was away on a scientific mission, and the other, called Clem, communicated with Mimi through handwritten messages, just like humans. Coupled with algorithmic adaptation to reflect the challenges each child faced with handwriting, the learning activities designed in this way proved promising in promoting children’s motivation and commitment and in overcoming low confidence.
While designed narratives can be useful to guide design choices and help steer interactions with users, they reflect a weaker understanding of this level of personality than user-driven narratives that arise during interaction. In contrast to designed narratives, the narratives that people construct about robots in situated interactions represent a stronger understanding of narratives. These narratives reflect a dimension of human personality regarding how individuals make meaning of their lives and how they relate to the world [
53]. Thinking in terms of this stronger notion of narratives suggests a shift in emphasis from the designing of things to the designing of meaningful human-technology
relations. Concerning robot personality research, we argue this calls for an extension toward the notion of “identity” as something that is co-constructed and enacted in different configurations of human, technological, and contextual factors.
Methodologically, studies of life narratives are naturalistic and can be challenging, as they are incompatible with hypothesis testing [
14]. Rather than looking for universals or for dependent variables that reliably predict outcomes of human-robot interactions, as is the case with studies that address personality constructs on Levels 1, 2, and 3, studies that address the narratives people construct about robots are exploratory and open-ended. For example, Syrdal et al. conducted an insightful study that attempted to provide a narrative frame for long-term (10 weeks) human-robot interaction [
82]. In this study, narrative framing techniques provided a narrative within which participants could interpret their experience of interacting with the robot. As part of providing an ecologically valid setting, this method also explicitly reflected aspects of the culture within which the interaction takes place, including easily overlooked aspects such as the conventional layout of a domestic kitchen. In this situation, the framing took advantage of the human tendency to create narratives and leveraged this as the basis for more robust human-robot interaction.
An additional point to consider is that narratives are not only constructed by humans. Social robots are best understood as co-creators of narratives about interactions they participate in [
87]. In that regard, one practical concern is that an important element of designing engaging social robots may be what narratives robots themselves appear to form about the interaction. In the cases of Cog and Kismet, for example, Turkle et al. identify several narratives that the robots appear to participate in constructing [
86]. These include “the discourse on aliveness”: Cog seems “wounded,” not broken, and Kismet can seem suddenly “deaf.” From our perspective, one central aspect of such interactions is how the robots themselves encourage them by sharing in the narrative.
Suggested Directions of Research.
In psychology, the topic of narrative identity has gained considerable attention [
53]. The HRI community is yet to decide which approaches to studying narrative identity are relevant and appropriate regarding gaining insights into the meaning-making process that enables interactions with and relations to social robots. In robot personality research, one research direction is to address the content themes in narratives that people construct about their robots and examine how these relate to the components of robot personality, as discussed under Levels 1–3. The above-mentioned study of the CoWriter activity with two Nao robots by Jacq et al. [
33] is an excellent example of how the narrative approach could potentially also be integrated in lab studies, e.g., to explore how different narrative scripts impact the perception of a robot’s personality. At the same time, more longitudinal studies in naturalistic environments (e.g., people’s homes or public spaces co-shared with robots) will be invaluable for deepening our understanding of how a unique robot identity emerges in situated interactions over time and which role it plays in the overall acceptance of robotic technologies.
The study of robot personality through the prism of narratives may be developed further by considering whether and how robots can facilitate or create their own narratives and life stories. This is different from the case above, however, in the sense that Cog and Kismet were designed to perpetuate the narratives that Turkle et al. discuss [
86]. Instead, future social robots might be designed with a view towards robots narrating their interactions and personalities more openly. One intriguing attempt in this direction comes from Winfield, who outlines a proposal for an embodied computational model of storytelling for robots [
99]. According to Winfield, building this model would open up the investigation of how narratives can emerge from a robot’s interactions with the world and then be shared, as stories, with others.
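Winfield’s model has, to our knowledge, not been built; as a purely illustrative toy sketch of the underlying idea (an episodic log of interactions retold as a first-person story), one could imagine something like the following. All class and function names here are our own invention, not taken from Winfield’s proposal:

```python
# Toy sketch of the idea behind an embodied storytelling model:
# a robot keeps an episodic log of its interactions and can
# retell them as a simple first-person narrative. Names are
# illustrative only, not from Winfield's proposal.
from dataclasses import dataclass, field

@dataclass
class Episode:
    actor: str    # who the robot interacted with
    event: str    # what happened
    feeling: str  # the robot's appraisal of the event

@dataclass
class NarrativeMemory:
    episodes: list = field(default_factory=list)

    def record(self, actor: str, event: str, feeling: str) -> None:
        """Log one interaction episode."""
        self.episodes.append(Episode(actor, event, feeling))

    def tell_story(self) -> str:
        """Retell the episodic log as a first-person story."""
        lines = [f"When {e.actor} {e.event}, I felt {e.feeling}."
                 for e in self.episodes]
        return " ".join(lines)

memory = NarrativeMemory()
memory.record("a child", "showed me a drawing", "curious")
memory.record("the child", "left the room", "alone")
print(memory.tell_story())
```

Even a sketch this simple makes visible the two steps Winfield’s proposal would need to ground: where the episodes come from (embodied interaction) and how they are shared with others (as stories).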
3.5 The Differentiating Role of Culture
Citing Shweder and Sullivan [
78], McAdams and Pals define culture as “the rich mix of meanings, practices, and discourses about human life that prevail in a given group or society” [
53, p.211]. Whereas HRI typically operationalizes culture as national culture [
77], our proposal’s grounding in STS expands the IF to other notions of culture as well. For example, culture can also be understood more locally, as in the case of epistemic cultures, the ways of knowing that shape a particular community’s practices [
8]. The general, constructionist starting point for the IF is simple: All technologies are socially embedded, and scientific research and engineering never exist outside of conventional meanings, practices, and discourses, i.e., culture [
46,
69].
In McAdams and Pals, culture is discussed in terms of how it affects the other levels of personality, each of which it affects differently; for example, culture (i) shapes how and to what extent people in different cultures express their dispositions (Level 2) or (ii) provides a menu from which people choose the narratives in terms of which they frame their own life stories (Level 4) [
53]. Culture also exerts a particular influence on each level of robot personality, as, in many respects, the HRI and SR communities already recognize [
19,
46]; culture influences peoples’ perceptions and attitudes toward robots [
3,
30]; and cultural norms are imprinted in robot designs, defining appropriate behaviors, expressions, and the context for their interpretations [
43,
44]. The difference from McAdams and Pals is that for robots, no matter which level of personality we are referring to, it is a question of how the cultural background of a human shapes what robot personality is being constructed at design-time and during interactions.
Concerning the stable features of robot personality (Level 1), Lee et al. point out differences across cultural backgrounds in how people interpret robots and what they expect from the look and feel of the robot [
44]. This captures elements of the overall design concept and embodiment of the robot, including its shape, gender, materials, and size. Lee and Sabanovic concluded that culturally variable perceptions of robots are fundamentally related to particular norms and social dynamics, rather than being reducible to more direct factors such as media exposure or religious beliefs [
43]. For designing robot personalities, this means that design choices about the shape, form, and character of the robot should be expected to play different roles for people with different cultural backgrounds.
Prior work also shows that the interpretation of robots’ personality traits (e.g., how extroverted/introverted they are perceived to be—Level 2) is shaped by culture. For example, Weiss et al. investigated how personality trait attributions to a socially interactive robot depend on the task and on participants’ cultural background, and found that cultural background mediates how traits are attributed [
95]. In a living room scenario with 28 people and a robot, Woods et al. found specific effects on the perception of robot personality based on participants’ gender, age, and technological background [
100]. Since gender, at least, is a strongly culturally mediated criterion, it seems useful to keep in mind how culture shapes perceptions of robot personality by shaping users’ beliefs about themselves and their relationship to the robot.
The word adaptation (Level 3) offers a curious overlap of two meanings: (i) changing one’s behavior (adapting) to accommodate a particular situation and (ii) McAdams and Pals’ sense of adaptation, which is a question of values. When we study robot personality, these meanings are often entangled, as in one in-the-field study, which showed that a robot was able to improve its performance on a collaborative task when it changed its behavior (i) in response to the information it obtained about humans’ cultural background, in which their values (ii) are implicit [
73]. This suggests it may be important for robots to adhere to cultural norms by dynamically adapting, within limits, to situational social rules. Doing so raises a variety of challenges, including technical ones about how to learn cultural norms effectively [
72,
92]. Rather than designing robots for particular cultural settings, Li et al. investigated how robots could be adapted to better suit different sets of norms and expectations [
45]. In this study, the authors outline relative priorities among norms for various cultures, providing the basis for culturally sensitive adaptations to robot behavior. One example the authors give is the importance, in different cultures (nationalities), of the robot complying with social conventions. This compliance appears to be more important, on average, for Chinese participants than for American participants, independent of the relative strength of social norms. In another study exploring culturally sensitive adaptations, Evers et al. found different effects of the strength of in-group feeling for Chinese and American participants, on average [
19]. In this study, Chinese participants, compared with US participants, were more comfortable when an assistant robot was characterized as a strong in-group member. In future work, robots may be designed to be flexible in these respects, leaving it up to successive interactions with users to facilitate the robot’s adaptation to its environment.
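The general mechanism behind such culturally sensitive adaptation can be sketched in a few lines. The following is a hypothetical illustration only: the culture labels, norm names, and numeric weights are invented for the example and are not taken from Li et al. or any other cited study. A robot scores candidate behaviors against a set of norms and weights those scores by the norm priorities of the user’s cultural background:

```python
# Hypothetical sketch: selecting a robot behavior by culturally
# weighted norm priorities. All labels and weights are invented
# for illustration, not drawn from any cited study.

# Per-culture priority weights for a few social norms (0..1).
NORM_PRIORITIES = {
    "culture_a": {"social_convention": 0.9, "efficiency": 0.4, "personal_space": 0.6},
    "culture_b": {"social_convention": 0.5, "efficiency": 0.8, "personal_space": 0.7},
}

# Candidate behaviors, scored by how well each satisfies each norm (0..1).
BEHAVIORS = {
    "formal_greeting": {"social_convention": 0.9, "efficiency": 0.3, "personal_space": 0.8},
    "quick_wave":      {"social_convention": 0.4, "efficiency": 0.9, "personal_space": 0.9},
}

def select_behavior(culture: str) -> str:
    """Pick the behavior whose priority-weighted norm score is highest."""
    weights = NORM_PRIORITIES[culture]
    def score(behavior_norms: dict) -> float:
        return sum(weights[norm] * s for norm, s in behavior_norms.items())
    return max(BEHAVIORS, key=lambda b: score(BEHAVIORS[b]))

print(select_behavior("culture_a"))  # weighting favors norm compliance
print(select_behavior("culture_b"))  # weighting favors efficiency
```

In this toy setup, the same behavior repertoire yields different choices once the priority weights change, which is the sense in which a single robot could be “flexible” across cultural settings; the open research problems are, of course, learning such weights from interaction rather than hand-coding them.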
Concerning narratives (Level 4), one set of studies provides insights that stand in contrast to the limited success of first-generation social robots. A trend has emerged in Japan to hold funeral services for AIBO robot dogs, who are treated as aging relatives, and robot repair shops have come to think of themselves as “clinics” [
40]. One feature of users’ narratives at work here is the animism inherent in Japanese Buddhist culture, which may lead to a radically different set of narratives about robot identity. However, Reference [
43] suggests this is too superficial a perspective. Reference [
69] develops a deeper perspective that brings out how cultural models of social behavior and cultural models of cognition and technology together contribute to the developing narratives of Japan’s robot culture. In this context especially, reducing culture to nationality is too coarse a level of analysis. Lee and Sabanovic, for example, point instead to differences between “tight” and “loose” cultures [
43], referring to the relative strength of social norms and the forms of sanctioning deviant behavior (tight cultures: strong norms and sanctions; loose cultures: weak norms and sanctions) [
26].
A final aspect to consider on the subject of culture is a critical stance on the values we design into robots. The particular challenge this raises for social robot design is how to avoid reinscribing prejudice or reinforcing discrimination through, for example, careless or harmful gendering of robot designs. One issue concerning gendered physical embodiment and personality is the pairing of humanoid robots that have a stereotypically female form with traits understood as being stereotypically feminine [
101]. Contrary to current practice, though, humanoid social robots have the potential to dismantle gender norms by participating in the construction of a different narrative [
67]. The relations between robot designs, personality, and ideas of how these should “naturally” be mapped onto one another, which were discussed at the beginning of Section 3, offer one site for intervention. Currently, such interventions are mostly being undertaken in the realm of (performance) art, but there has been early work here that offers a starting point for ambitious, critical, and emancipatory robots, such as Sontopski’s creation of a stereotypically feminine-voiced assistant that resists abusive behavior [
79].