
Stop Ignoring Me! On Fighting the Trivialization of Social Robots in Public Spaces

Published: 08 February 2022

Abstract

Service and social robots in public scenarios will face various tasks in future applications, such as guiding people or admonishing them to provide assistance or convey social norms. Robots in public spaces might also take on the roles of authority figures who admonish people (e.g., security or guard robots). However, recent investigations showed that people ignore the admonishments of robots. Thus, in this work, we look at the reasons why people might ignore robots, based on the Cognitive Dissonance Theory (CDT). We present the results of two consecutive field observations in which a robot admonishes participants (i.e., pedestrians in a shopping mall) and requests them to stop using a smartphone while walking, which is considered a norm-violating behavior. In the first field observation, we approached 160 participants over four days and conducted semi-structured interviews with 19 of them. Approximately half of the people ignored the robot, and half followed the instructions. Our interview results show that people who ignore the robot indeed use trivialization as a cognitive dissonance reduction strategy to justify ignoring it. Based on our analysis of the results, we developed a counter-trivialization strategy that anticipates this dissonance reduction strategy. We admonished 167 participants in our second field observation over four days, and our results show that significantly fewer people ignore the instructions of the robot when it uses a counter-trivialization strategy.

    1 Introduction

Scientists and politicians expect that interaction with social robots will be an essential technological part of future societies [28, 39]. Researchers investigate the deployment of robots for education, exercising, policing, or therapy [5, 15, 16, 27, 34]. While previous work looked at social robots as tools to tackle societal changes (e.g., demographic change, a decrease in caretakers and health personnel, unhealthy lifestyle choices) by investigating how companion-like robots could persuade people to improve relevant task outcomes (e.g., hours/days spent exercising/dieting, number of vocabulary items learned, energy saved), few works looked at scenarios where robots try to persuade people through admonishment. However, why is admonition an essential aspect of future social robots worth investigating?
Admonishment is a ubiquitous behavior in human societies. Peers, family members, or friends can laud or admonish behavior. Strangers, too, criticize and admonish people’s behavior in public places if it violates social conventions. Thus, this form of communication can establish and transmit social norms. Public authority agents (e.g., police and public order officers, security guards) admonish people when they violate general rules (e.g., smoking where it is not allowed, not keeping social distance, throwing garbage on the street). However, approaching and admonishing people can also pose a risk to the admonisher. It could create resistance and backfire, resulting in unintended danger. Thus, robots could be a helpful tool for admonition and might reduce the risk for a human admonisher (e.g., being threatened or, as in the case of the current Covid-19 pandemic, the danger of getting infected).
    We can assume that with the deployment of robots and artificial (virtual) agents in other domains and places such as traffic control, policing, shopping malls, and autonomous driving vehicles, more scenarios will appear where these agents could admonish1 people. Thus, it seems pivotal to investigate people’s behavior when being admonished by technological artifacts.
Regarding praise or criticism, a vast literature has already explored how feedback from social robots can influence and persuade people in teaching or exercising situations (e.g., References [18, 21, 35, 38]). While these works focused on which behavior leads to higher compliance, they did not investigate why people might resist negative feedback or criticism. In our previous investigation, we looked at a social robot admonishing people to follow social norms in a public space (Reference [24]; see Figure 1 for a use case example). A robot in a shopping mall admonishes participants2 to follow the norm of not using a smartphone while walking.3 While the observation shows that some participants comply, many still ignore the robot. In the present work, we want to investigate why some participants are hard to persuade. Thus, our research question is to examine why people ignore robots and to find out how to counteract this ignoring behavior.
    Fig. 1.
    Fig. 1. Short sequence showing a human listening to the admonishment, looking at the humanoid robot and then continuing to use the smartphone.
Presumably, admonishment can create an uneasy feeling in people, i.e., a dissonant feeling [9]. Based on Festinger’s theory of dissonance reduction, people apply different strategies to decrease this uncomfortable feeling and return to a mentally balanced state. Either they change their behavior and follow a robot’s instructions, or they alter their cognition to justify their behavior. Even though this theory is widely applied and studied in psychology, few works in Human-Robot Interaction (HRI) have examined it in detail to explain people’s behavior.
We study this problem from the stance of cognitive dissonance reduction. Thus, we first present a qualitative study using semi-structured interviews to identify the cognitive dissonance reduction strategies people use when (not) following a robot’s instructions. Second, we use the interview results to select a countermeasure behavior for the robot that anticipates the human’s cognitive dissonance reduction strategy. This countermeasure interferes with their ability to reduce dissonance, which persuades people to change their behavior and reduces the number of people ignoring the robot.
    The article is organized as follows: The next section introduces persuasive technologies, Cognitive Dissonance Theory (CDT), and related work to robots influencing people. Section 3 presents our first field study using semi-structured interviews. Based on the interview results, we explain our intervention design and the results of a second field study in Section 4. We end this manuscript with an overall discussion and conclusion in Section 5.

    2 Related Work

This section introduces the application of CDT in HRI and presents related work on robots instructing or persuading people. Since there is a lot of work on this topic, especially on robots encouraging or motivating people, we specifically focus on research related to admonishment (i.e., robots that give some form of negative feedback to people to influence their behavior).

    2.1 Cognitive Dissonance Theory and HRI

People feel uncomfortable when confronted with the fact that they hold two or more contradictory beliefs or that their beliefs and their behavior do not align (e.g., knowing that smoking causes cancer, but continuing to smoke). Festinger proposed in his theory of cognitive dissonance that people seek internal coherence to remain mentally stable [9]. The experience of inconsistency results in an undesirable psychological state, and people are motivated to reduce cognitive dissonance. Festinger describes different ways in which people reduce this dissonance: they modify their cognition, add cognition, deny the facts, or trivialize the conflict. In a hybrid society, dissonance will presumably also occur when people interact with robots, and people will try to reduce this dissonant feeling, especially in cases where robots influence people’s behavior, such as in therapy, rehabilitation, exercising, or policing. In these scenarios, the robot gives feedback and advice to people. If people are, for example, not following the instructions or the intended therapy plan, then the robot’s feedback could create an uncomfortable feeling when it contradicts people’s own beliefs about their performance or progress. Therefore, people could seek a strategy to reduce their dissonant feeling.
While CDT is often used as an explanation in HRI research, few works study which strategies people use to explain their reactions towards robots. Research that discusses this theory as an explanation for its results deals with the influence of robot height, the incongruity of gestures and mood in robot storytellers, and attitudinal changes in elderly citizens toward a tele-operated robot [6, 31, 40]. While these works use CDT to explain their results, they do not investigate CDT as a cause of the results in the first place. In contrast, other works used CDT to investigate inappropriate emotional responses from robots that help students change bad attitudes when encountering a cognitively dissonant situation, or used cognitive dissonance as a measure of reactions in HRI [19, 20, 42].
In summary, previous research showed how to use CDT as a measure in HRI, how robots could help reduce cognitive dissonance in tutoring scenarios, and discussed results in relation to CDT. However, none of these works looked precisely at the reduction strategies people use when they feel discomfort from interacting with a robot or from its feedback. Thus, we aim to fill this research gap by interviewing people who are confronted with a robot scenario that is likely to induce discomfort: admonishment.

    2.2 Praise, Criticism, and Admonishment Feedback in Persuasive HRI

Due to people’s tendency to anthropomorphize technology, social robots offer the chance to be flexibly deployed to elicit behavior change. The general research question, tackled in many previous works, is how a robot’s behavior and feedback can provoke behavior change. Various approaches therefore tested the robot as an external motivator utilizing positive or negative feedback in multiple use cases. In a classroom setting, students were significantly more attracted to a robot giving positive feedback compared to negative feedback [30]. A study of the influence of positive, negative, or factual robot feedback on energy consumption suggests that negative feedback has the most substantial effect [12]. In exercising applications, a comparison of flattering, positive, and negative feedback showed that positive or flattering feedback leads to more appreciation by older adults, but no objective performance increase [2]. However, other research showed that acknowledging feedback during co-active exercising with a robot partner can indeed boost exercising motivation and increase exercising time [34]. When working in a joint activity, as in coaching scenarios, team success and failure lead to self-credit or blame for the outcome. Research suggests that people felt less attracted to the robot when it gave a poor evaluation and that they blamed the robot for this result [41]. They dismissed criticism from the robot but took credit for positive appraisal.
The majority of existing research on admonishment focuses on its impact on participants’ emotions and feedback. Few studies specifically look at how people admonish others, especially in public space. Mizumaru et al. [24] studied how a robot can most effectively admonish people by observing and imitating the approach behaviors of a shopping-mall guard. The authors found that the guard behaved differently when approaching a normal pedestrian than when approaching a person to be warned. The robot approached participants who were using a smartphone while walking and asked them to stop. A comparison between different robot approaches (i.e., admonishing approach or friendly approach) showed that humans are more compliant when the robot uses an admonishing approach.
Overall, there is no clear answer to the research question of which kind of robot feedback is most persuasive. It depends on the task, the social role of the robot, and the user. The typical research procedure consists of generating a hypothesis that one of two (or more) robot strategies leads to a desired task outcome, testing it, and accepting or rejecting the hypothesis. However, the cases where the robot fails to persuade the user are often not rigorously investigated afterwards. Yet these cases are essential for designing persuasive technologies, because persuasive attempts can also create adverse effects and lead to reactance, which in turn reduces compliance.
Few works have actually investigated this phenomenon. Psychological reactance was studied by Roubroeks et al. [33] and Ghazali et al. [11]. Both studies examined whether persuasive robots can cause psychological reactance. They found that people experienced reactance when receiving high-threatening advice compared to low-threatening advice. The follow-up results suggest that a social robot presenting more social cues causes higher reactance, and that this effect is stronger when the user feels involved in the task at hand. These results underline the importance of studying what happens when persuasion does not work and people resist. Thus, there is still a research gap in studying precisely why people resist and do not comply with a robot’s persuasion.
Therefore, our work aims to bridge this gap with a qualitative study of the reasons why people ignore a robot’s instruction and of how these insights can improve the robot’s persuasive strategies.
Many of the above-mentioned studies show the usefulness of different types of approaches (i.e., feedback, verbal, facial, motion) to persuade participants to follow the requests of robots. However, previous research did not investigate the reasons why participants ignore the robot.

    3 Field Study I: Interviews

    In this section, we explain why participants might ignore robots based on the CDT, explain the prerequisites for dissonance reduction, and present the results of a preliminary interview study.

    3.1 CDT in Admonishment Scenarios

How could CDT explain why people are resistant to robots’ negative feedback (see Figure 2 for an overview)? A feeling of cognitive dissonance arises when one’s beliefs and behavior are incoherent. In the shopping mall scenario, people might think that public regulations and rules ((1) initial beliefs; see Figure 2) are essential for social cohesion, but sometimes violate these rules. If they are then confronted with feedback ((2) understanding) from somebody or something (e.g., a human or a robot) that their behavior is not aligned with their actual beliefs, this can create a feeling of dissonance and result in an uncomfortable feeling ((3) discomfort). Since an uncomfortable feeling is mentally unstable, people try to reduce this discomfort. In the best case, people change their behavior to reduce the discomfort. Otherwise, people can reduce the dissonant feeling and justify their behavior by either
    Fig. 2.
Fig. 2. Cognitive Dissonance Theory: When people’s beliefs and attitudes (1) are not coherent with their actual behavior, which is made salient (2), e.g., by feedback from others, this creates dissonance. The feeling of dissonance creates discomfort (3), which people try to reduce (4) by changing their beliefs or by changing their behavior (5). In the case of the robot admonishing people in malls, people could trivialize the robot and ignore it.
    modifying their cognition (e.g., there is nobody around anyway, so it is okay to use the smartphone);
    adding cognition (e.g., it is okay to use the phone, because I need to find the location on the map);
    denying the conflict (e.g., there is no evidence that using the smartphone actually causes incidents);
    or trivializing (e.g., I just don’t care about the rules or what you say to me).
We assume that people use different reduction strategies depending on the agent that approaches and admonishes them. However, finding out which strategies people use to reduce dissonance is challenging in an experimental lab setting. Hence, the first stage of this research is to conduct field observations and semi-structured interviews. This first step clarifies which cognitive strategies participants use when confronted with the admonishment of a robot. Primarily, we want to investigate possible explanations of why participants ignore the robot in an uncomfortable situation. Thus, we look at the situation from a CDT perspective and gather preliminary hints about which dissonance reduction strategy is active when participants ignore or follow the robot’s admonishment.
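To make the process sketched in Figure 2 concrete, the following minimal Python sketch encodes the prerequisites and reduction modes described above. It is purely illustrative: the strategy labels follow Festinger’s modes as used in this article, while all identifiers and the helper function are our own hypothetical choices, not part of any study software.

    from enum import Enum, auto

    class Reduction(Enum):
        # Dissonance reduction modes, labeled as in this article.
        CHANGE_BEHAVIOR = auto()   # stop using the smartphone (desired outcome)
        MODIFY_COGNITION = auto()  # "nobody is around anyway, so it is okay"
        ADD_COGNITION = auto()     # "I need the phone to find the location on the map"
        DENY_CONFLICT = auto()     # "there is no evidence this actually causes incidents"
        TRIVIALIZE = auto()        # "it is just a robot; I do not care what it says"

    def discomfort_arises(holds_norm_belief: bool, violates_norm: bool,
                          understood_admonishment: bool) -> bool:
        # All three prerequisites must hold before any reduction strategy is triggered.
        return holds_norm_belief and violates_norm and understood_admonishment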

    3.2 Why is walking with smartphones dangerous?

In our scenario, we deployed a social robot in a shopping mall that approaches people using a smartphone while walking; using a smartphone while walking is considered an annoying and dangerous behavior in Japanese society. At major transportation hubs, shopping malls, and other public places, signs and loudspeakers announce that people should not use smartphones while walking. To prevent accidents,4 initiatives aim to make avoiding smartphone use while walking a social norm. However, establishing social rules is challenging, and signs or speakers might not convey these norms as effectively as proactive agents. Thus, in our scenario, the robot confronts people, says that using a smartphone while walking is dangerous, and asks them to stop doing it.

    3.3 Method

Since our previous observations (see Reference [24]) showed that not all participants follow the robot’s instructions and some ignore the robot, we started a new investigation and conducted interviews with participants to find out why they either ignore the robot or follow its instructions. The interviews should clarify whether participants experience cognitive dissonance when robots admonish them and how they reduce this dissonance. The interview consists of five main questions that probe the four requirements for cognitive dissonance (initial beliefs, understanding what the robot said, participants’ discomfort, and the reduction strategies participants used to justify (not) following the instructions). Finally, we were interested in whether participants perceive the robot as being able to make moral judgments. In the following, we explain which questions we asked the participants (see Figure 3 for an overview).
    Fig. 3.
Fig. 3. Overview of the semi-structured interview. Except for the first introductory question (i.e., what was the robot saying to you?), the flow between the questions depends on the participant’s initial answers, and the interviewer tries to maintain the style of a colloquial conversation so that participants feel comfortable during the interview.
Introduction and Understanding. To know whether participants noticed the robot and could hear its admonishment, we asked them what the robot was saying to them and whether they noticed it. To make sure that the robot is perceivable, we used a loud voice. Additionally, we can validate the statements using video recordings. Saying that they did not understand the robot could also be a cognitive reduction technique.
    Q1: Discomfort. Another assumption for dissonance reduction to occur is an uncomfortable feeling. Hence, we asked participants how they felt when the robot was approaching and talking to them.
    Q2: Reduction. The question related to the used reduction strategy clarifies the participants’ cognition to justify their behavior. We asked them how they decided to follow or not to follow the instructions of the robot or why they continued to use their smartphone.
Q3: Beliefs. One requirement for cognitive dissonance is that participants hold initial beliefs and that their behavior violates these beliefs. To probe for this requirement, we asked participants what they think about public regulations and rules in general. This question aims to identify whether they actually hold the belief that using a smartphone while walking is dangerous, which could hint at a potential conflict when the robot confronts them with the fact that they are not practicing what they preach.
Q4: Moral Judgment. The robot’s task includes evaluating whether the participant’s behavior is dangerous or not, and thus whether it is right or wrong. This implies that the robot is capable of evaluating participants’ behavior and performing a morally sensitive task. Therefore, we focused in our interview on the perceived morality of the agent and asked: “Is the robot able to decide what is right or wrong?” and “Why?”
Of course, various factors (e.g., authority, morality, autonomy) could influence the activation of different reduction strategies. Unfortunately, such field interviews are highly cost-intensive, and we cannot probe the participants for all possible confounds.

    3.3.1 Robot Platform.

The humanoid robot Robovie R3 was deployed for the admonishment experiments. The robot has an omnidirectional base for movement. Its translational velocity is limited to \(1.2\,m/s\) and its translational acceleration to \(1.5\,m/s^2\); its rotational motion is bounded by \(1\,rad/s\) and \(2\,rad/s^2\). The robot carries a 32-layer Velodyne LiDAR for self-localization and multiple Hokuyo URG laser scanners for short-distance obstacle detection. For admonishment, the robot can play prescribed sentences in a child-like voice. In addition, the robot can exhibit various gestures through motions of its upper body, for instance, expressing body language by rotating its head and two arms. The robot can be teleoperated via joypad by human operators during the field experiments.
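To illustrate how such platform bounds are typically enforced, the following Python snippet clamps a commanded velocity to the limits stated above. This is a hypothetical sketch assuming a simple periodic velocity controller; the control period and all function and variable names are our own assumptions, not details of the actual Robovie R3 software.

    import numpy as np

    V_MAX, A_MAX = 1.2, 1.5      # m/s, m/s^2 (translation, from the text)
    W_MAX, ALPHA_MAX = 1.0, 2.0  # rad/s, rad/s^2 (rotation, from the text)
    DT = 0.05                    # assumed 20 Hz control period

    def limit_command(v_cmd, v_prev, w_cmd, w_prev):
        # Clamp the translational speed of the (vx, vy) command.
        speed = np.linalg.norm(v_cmd)
        if speed > V_MAX:
            v_cmd = v_cmd * (V_MAX / speed)
        # Clamp the translational acceleration over one control period.
        dv = v_cmd - v_prev
        dv_norm = np.linalg.norm(dv)
        if dv_norm > A_MAX * DT:
            v_cmd = v_prev + dv * (A_MAX * DT / dv_norm)
        # Clamp the rotational velocity and acceleration.
        w_cmd = float(np.clip(w_cmd, -W_MAX, W_MAX))
        w_cmd = float(np.clip(w_cmd, w_prev - ALPHA_MAX * DT, w_prev + ALPHA_MAX * DT))
        return v_cmd, w_cmd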

    3.3.2 Procedure.

The field observation experiment was run on four days (two consecutive weekdays in each of two weeks) in a large shopping mall in Japan. We placed the robot in a large hall of the mall from 11 am until its battery was empty or we encountered technical difficulties. Two experimenters controlled the robot, and two experimenters conducted interviews with the participants. One experimenter teleoperated the robot, while the other spotted participants to approach and counted the participants’ behavior.
When a person using a smartphone while walking appeared in the hall, a Wizard-of-Oz operator moved the robot to the person; upon approaching, the robot said two times: “Excuse me. It is dangerous to use your smartphone while walking. Please stop doing it.” We operated the robot to follow the person until they stopped using the phone. In case a participant ignored the first round of admonishment, the robot repeated the admonishment sentences for another round (two times) until the participant finally stopped using the phone or was out of the robot’s reach.
In any case, after the admonishment, an interviewer approached the pedestrian and asked her/him for an interview. The interviewer explained that the interview would take around 10 minutes and asked for informed consent to record it. After participants gave informed consent, the interviewer always first asked whether they understood what the robot was saying. The order of the subsequent questions was not fixed and depended on the participants’ answers. After the interviewer had asked all leading questions, the interview ended, and the interviewee received a monetary compensation of 3,000 Yen.
Inclusion Criteria for admonishment. We controlled the robot to admonish participants who were using their smartphone while walking. As a clear inclusion criterion, we defined participants who used their smartphone or were, in general, looking at the smartphone display while walking in the shopping mall. We approached couples or groups when either all of them used a smartphone or when one person of the group used a smartphone and the robot could easily approach this person (i.e., when the other members of the group were not blocking the way to the person using the smartphone).
Exclusion Criteria. We did not approach participants who were using headphones, because we could not guarantee that they would notice the robot and hear its admonishment. We also did not approach participants who were only holding their phone while walking and looking in their direction of movement. Finally, we did not approach participants who obviously looked like non-Japanese tourists, because we could not ensure that they spoke Japanese.
Participants. In total, the robot admonished 160 persons; 84 followed the instruction and stopped using their smartphone after the first admonishment sentence, while 76 ignored the robot after the first admonishment sentence. After the final reaction, 88 had stopped using their phone (meaning four additional people stopped) and 72 continued to ignore the robot. A possible reason for this small increase may be that participants were already out of the robot’s reach or did not regard the repeated admonishment as directed at themselves.
We collected 12 interviews from participants who followed the instruction. However, only 7 participants who ignored the robot’s instruction agreed to give an interview. Participants who ignored the robot hastened from the scene and were thus hard to catch for the interviewers. Since the robot experiments were conducted around noon and in the afternoon on workdays, only very few pedestrians agreed to be interviewed. The main reasons given for declining were “I’m busy” or “There’s no time,” while many other participants who did not vocally refuse either walked away without stopping when the interviewers spoke to them or gestured their refusal (shaking their heads or hands) and walked away silently.

    3.3.3 Analysis Procedure.

We recorded the interviews and translated them from Japanese to English. We analyzed the answers for hints of the different strategies participants use to justify their behavior (i.e., modify, add, deny, trivialize), the feelings they describe (e.g., confused, comfortable, surprised, annoyed, unconcerned, ashamed), and their initial beliefs about regulations (whether regulations are important or not). Finally, we looked at whether participants think the robot is capable of judging people (yes/no) and their reasons for why it can or cannot judge people.
Additionally, we categorized the behavior of the people after the admonishment as ignored or stopped:
Ignored: participants who continued to use their smartphone after the admonishment. People in this category might have looked at the robot after the admonishment but continued looking at their phone.
Stopped: participants who stopped using their smartphone after the admonishment. People in this category typically looked at the robot, stopped looking at their smartphone, and put it in their pocket or a smartphone case. In some cases, they stopped using their smartphone without looking at the robot.
After an initial screening of the participants’ responses, we identified that we could sort the interview responses into a binary coding. For Q1, we categorized the responses by whether participants focused on their cognition or their emotion. For Q2, we categorized the responses by whether participants changed their behavior because they felt guilty or ashamed (i.e., self-blame) or whether they blamed the robot for being just a machine (e.g., not human-like, not polite). Q3 assessed the participants’ belief that rules are important or not important; we coded the responses accordingly. Finally, the responses to Q4 fell into the categories that the robot is capable of judging the participants’ behavior or that it is not.
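To make this coding scheme explicit, the sketch below tallies coded responses per behavior group. The category labels mirror the text; the data layout and the function itself are hypothetical and introduced only for illustration.

    from collections import Counter

    CODING = {
        "Q1": ("emotion", "cognition"),
        "Q2": ("self-blame", "trivialize"),
        "Q3": ("important", "irrelevant"),
        "Q4": ("capable", "not capable"),
    }

    def tally(responses):
        # responses: list of dicts such as {"behavior": "stopped", "Q1": "emotion", ...}
        counts = Counter()
        for response in responses:
            for question, categories in CODING.items():
                code = response.get(question)
                if code in categories:
                    counts[(response["behavior"], question, code)] += 1
        return counts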

    3.4 Results

Table 1.

Behavior | Q1: emotion | Q1: cognition | Q2: self-blame | Q2: trivialize
stopped  |      5      |       1       |        6       |        0
ignored  |      2      |       5       |        0       |        6

Behavior | Q3: important | Q3: irrelevant | Q4: capable | Q4: not capable
stopped  |       6       |        0       |      2      |        4
ignored  |       5       |        2       |      3      |        3

Table 1. We Divided the Answers Participants Gave to Q1–Q4 into Two Categories Each
For Q1, we identified whether participants’ answers related to their emotion or their cognition. Answer patterns for Q2 showed that participants either blamed themselves for violating good public conduct or trivialized the robot. Q3 assessed participants’ beliefs about the importance of following public rules. Finally, participants answered that the robot is either capable of judging their behavior or not. The table shows the counts of participants’ answers falling into these categories.
Table 1 shows a count summary of the categories into which the interview responses fall. In the following, we briefly summarize the main results for each interview question.

    3.4.1 Question 1.

The results indicate a subtle distinction in the response behavior between participants who followed the robot’s instruction and participants who did not. Regarding Q1, participants who stopped using their smartphones referred more to their actual feelings when the robot admonished them (e.g., ID2: “I was scared for a moment”; ID5: “I was a little surprised about the robot recognizing this and scared of the noise”). In contrast, participants in the ignoring group talked more about their cognition and perception (e.g., ID3: “The robot came very slowly, it saw me”; ID18: “I have noticed it, but I didn’t know why it was there”). They highlighted that they could see or notice the robot approaching them and their reasoning about it.

    3.4.2 Question 2.

The question about their reasons to change their behavior gives insights into participants’ cognitive strategies for coping with the situation. Participants who stopped highlighted their feeling of being ashamed and being evaluated by others, such as their families (e.g., ID1: “When the robot told me, I felt ashamed because of the people around me”). Additionally, they pointed out that the robot was right about what it was saying and that using the smartphone while walking can indeed be dangerous (e.g., ID6: “Because the robot told me, I thought I have to be careful”). Participants who ignored the robot focused their reasoning on the robot itself, not on other aspects that could justify using the smartphone while walking, and trivialized the robot (e.g., ID17: “The robot is just a machine”; ID19: “The robot wasn’t saying ‘excuse me’ or something human-like before talking to me”; ID12: “I thought it was just a guidance. Robot voice not humanlike enough”).

    3.4.3 Question 3.

To find out whether participants could feel a sort of cognitive dissonance because their behavior is not in line with their own beliefs, we made sure that the task the robot is performing is essential for the participants. The answers to this question revealed that most participants in both groups think that it is, in general, necessary to follow social rules (e.g., ID5: “We should follow rules”; ID19: “I have children, so I have to be a role model.”). However, in both groups, participants also admitted that exceptions to rules are legitimate and that it is essential to balance (e.g., ID1: “Public rules are important, but it depends on the circumstances”; ID19: “But if it is urgent a little bit is ok”).

    3.4.4 Question 4.

Regarding the last question, whether the robot can judge people, we could not find a clear, distinct pattern between the different behavior outcomes. In both groups, some participants think that robots can judge the behavior of people (e.g., ID15: “I think it’s possible, I wonder what the robot will do when it arouses antipathy”), and some participants thought it is technologically too challenging and that the robot must be tele-operated (e.g., ID4: “No, I think the robot was programmed. Robots need human support also in the future”).

    3.5 Discussion

We conducted a preliminary field observation to investigate whether we could explain, based on CDT, why participants follow or ignore a robot’s instruction. This theory could explicate participants’ cognitive dissonance reduction modes for handling an uncomfortable situation. To verify whether cognitive dissonance applies to this scenario and whether participants try to reduce it, we probed for the four necessary prerequisites. These could show that participants perceive a dissonant feeling when confronted with the robot’s admonishment and use a cognitive reduction strategy to reduce the dissonance. These four prerequisites are:
    (1)
    participants should hear what the robot was saying,
    (2)
they should hold the belief that following public regulations is essential,
    (3)
    they should feel discomfort,
    (4)
finally, participants should either change their behavior to resolve the dissonance and the uncomfortable feeling, or they should try to find cognitive reasons that can still justify their behavior.
    In the following, we will discuss whether we found evidence for these four prerequisites.
    Interpretation: Prerequisite 1. First, we can assume that all participants could understand the robot’s statement and heard what the robot was saying. Thus, we met the essential requirement for our investigation.
Interpretation: Prerequisite 2. Regarding the importance of following public rules, we found evidence that, in general, participants agree on the importance of following public policies. However, participants in both groups highlighted that there can be situations where it is acceptable to disobey the rules. These answers point to one of the modes of cognitive dissonance reduction, adding or modifying cognition (e.g., a quick look is okay if there are not so many people). However, this is not a distinct pattern between participants who ignored or followed the robot’s admonishment. Participants who changed their behavior also think that it is sometimes okay to ignore rules, and at the same time, some participants who ignored the robot think that public rules are essential. Thus, adding or modifying cognition does not seem to be the critical mode for justifying participants’ behavior. Regardless of the modification, we can assume that most of the interview participants believe that public rules are essential.
Interpretation: Prerequisite 3. Concerning the feelings participants had when confronted with the robot’s admonishment, we found an apparent difference between compliant and non-compliant participants. Participants who stopped using their smartphone related more to their feelings in their answers. Since the robot confronted them with a contradiction between their behavior and their beliefs, they did not explain away their negative feelings but instead changed their actual behavior. Accordingly, participants who did not change their behavior and continued to use their smartphone related more to their perception and cognition. This result suggests that these participants might have reduced their uncomfortable feeling and thus could not articulate or access their negative emotions. The missing mention of feelings in the ignoring group hints that they suppressed their feelings, because they did not change their behavior.
Interpretation: Prerequisite 4. While participants who stopped using their smartphones focus on their feeling of being ashamed or on the veracity of the robot’s statement, the participants who ignored the robot show that they do not take the robot’s instruction seriously but rather think of it as general guidance. Additionally, participants who ignored the robot also highlighted technological immaturities such as the robotic voice or appearance. These statements present evidence that participants trivialize the robot and thus justify ignoring it. Except for one participant, participants who ignored the robot solely used agent trivialization arguments.
Overall, our interview results show a tendency that pedestrians’ behavior can be explained in light of CDT. Some participants justified not following the robot’s instructions using the mode of adding or modifying their cognition, but most participants used trivializing strategies to reduce their dissonant feeling.
    In the next part of our work, we elaborate on how we could exploit this finding to enhance the robot’s effectiveness and persuade more participants to follow the robot’s instructions.

    4 Field Study II: Cognitive Dissonance Reduction Intervention Design

We developed an intervention behavior for the robot based on the ideas from the qualitative interviews and theory. In the following, we report our considerations in designing a cognitive dissonance intervention strategy, how we pre-tested different robot utterances, and the results of our field trials evaluating the finally selected intervention strategy.

    4.1 Design Considerations

Our interview analysis showed evidence that people’s tendency to trivialize the robot agent is a plausible reason why participants ignore the robot. In the second step of our investigation, we explore strategies to implement behavior that stops the robot’s trivialization.
We focus on utterances to stop people from trivializing the robot. The results from our interviews show that participants often use a trivialization strategy claiming
    that the robot’s utterance is not human-like
    that it is just a machine
    that the technology is immature
to justify their behavior. To persuade participants, we need to counteract their dissonance reduction strategy so that participants still experience a dissonance, which they can reduce by following the robot’s advice not to use their smartphone while walking.

    4.2 Pre-testing Strategies

The interview results show that the robot’s trivialization and the importance of the task are crucial factors participants use to justify their behavior. We tested three different robot counter-trivialization strategies in a piloting phase with five participants. These participants were associates uninformed about our investigation.
We instructed the participants to walk around in our lab with their smartphones and to pretend to ignore the robot when it approached them. After the robot said the counter-trivialization sentence, we asked participants how they felt, whether the sentence was credible, and whether they would change their behavior.
    The sentences either target the trivialization of the robot, the trivialization of the task, or a combination of both, which aim to anticipate the humans’ cognition about the situation:
    You might think I am just a robot, but I am not.
    You might think this task is not important, but it is important to me.
    You might think I am just a robot, but this task is important.
To raise the humans’ awareness, an attention-getting phrase precedes these sentences (i.e., “Please stop ignoring me!”).
    The feedback suggested that the robot either sounds too angry, confuses people regarding its capabilities, is not polite, or that the robot does not have human-like opinions on the task.
However, the feedback from our pilot testing revealed that the robot would be more persuasive if it highlighted that it is working for everybody. This insight is in line with the qualitative results from our interviews. A few participants stated an interdependent reason why they stopped using their smartphone (e.g., “felt ashamed because of people around me,” “remembered what the family said,” “I try to be as careful as possible [to not bother others]”). Since dissonance is experienced as a form of self-image maintenance, it is essential to look at cultural differences in the construction of the self [14]. People with an East Asian socialization tend to attach greater importance to interpersonal relationships with their in-group members. They try to fit in with their in-groups appropriately and to anticipate the preferences of their close others promptly and correctly. In comparison to Western self-concepts of making independent choices, people from East Asian cultures hold an interdependent self-view and stress much more the importance of harmonious interpersonal relationships [13, 23]. Thus, highlighting the importance of the task based on an interdependent factor that regulates a trouble-free society seems a promising direction. Making an unpopular decision, like violating public rules that affect everybody, can create cognitive dissonance based on the interdependent culture of the society and the construction of the self. Thus, we decided on a verbal intervention strategy that targets both the trivialization of the robot and the societal importance of the task:
    You might think I am just a robot, but I work for the safety for all of us.

    4.3 Field Experiment: Method and Procedure

Based on the aforementioned design considerations, we conducted a second field observation to test our counter-trivialization approach at the same large shopping mall in Japan.
In this field trial, we again approached participants who were using a smartphone while walking and admonished them using the same admonishment sentences as in our first field observation (i.e., “Walking with a smartphone is dangerous. Please stop doing it.”; see Section 3.3.2). The inclusion and exclusion criteria and the general procedure were the same as in the previous field observation. Additionally, we applied the counter-trivialization strategy if participants did not follow the robot’s instruction (by saying “Please stop ignoring me! You might think I am just a robot, but I work for the safety for all of us.”).
In total, we approached n = 167 participants (n = 160 in the previous trial) in our second field observation, on two days in each of two consecutive weeks (i.e., four days in total), from 11 am until the robot’s battery was empty or we encountered technical problems. The most extended trial lasted until 6 pm. We included and excluded participants in the shopping mall as described in Section 3.3.2.

    4.3.1 Conditions.

Counter-trivialization behavior. In case the person ignored the first admonishment from the robot and continued to use the smartphone while walking, we controlled the robot to follow the person and triggered the counter-trivialization sentence (i.e., “You might think I am just a robot, but I work for the safety for all of us.”).
Baseline behavior. As introduced in Section 3.3.2, in the baseline condition, the robot approached participants without the counter-trivialization behavior and simply repeated the admonishment sentences two times while the participants used their smartphone (i.e., “Walking with a smartphone is dangerous. Please stop doing it.”). We tele-operated the robot to follow the participants and, in case they did not stop using their smartphone, to repeat the admonishment sentence.
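A compact way to summarize both conditions is the utterance schedule below. This Python sketch only encodes the order of the scripted sentences; in the actual field trials, the flow was driven by human teleoperators watching the pedestrian, and the function name and arguments are our own assumptions.

    ADMONISH = "Walking with a smartphone is dangerous. Please stop doing it."
    COUNTER = ("Please stop ignoring me! You might think I am just a robot, "
               "but I work for the safety for all of us.")

    def utterance_schedule(condition, ignored_first_round):
        # Round 1 is identical in both conditions: the admonishment, said twice.
        schedule = [ADMONISH, ADMONISH]
        if ignored_first_round:
            if condition == "counter-trivialization":
                schedule.append(COUNTER)
            else:  # baseline: repeat the admonishment round
                schedule += [ADMONISH, ADMONISH]
        return schedule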

    4.3.2 Observational Measurements.

As primary objective measurements, we counted:
    \(\#\) participants who stopped using their smartphone after the admonishment sentence,
    \(\#\) participants who ignored the robot after the admonishment sentence,
    \(\#\) participants who stopped using the smartphone after the counter-trivialization sentence,
    \(\#\) participants who ignored the robot also after the counter-trivialization sentence.
In some situations, other pedestrians blocked the robot, or participants moved too fast, so it is uncertain whether they could hear the counter-trivialization sentence. We conservatively counted these participants as ignoring the robot.

    4.4 Hypothesis

    Based on our preliminary field observation and interview results, we hypothesize that:
The admonishment behavior with a counter-trivialization strategy will increase the number of participants who stop using their smartphones compared to the admonishment behavior without a counter-trivialization strategy.

    4.5 Results

    In this section, we will first explain the qualitative observational results, followed by the quantitative statistical results.

    4.5.1 Qualitative Observations.

    The counter-trivialization led to several subjective observations:
    (1)
    Some participants ignored the robot, but they stopped and listened to the robot to hear what it was saying after the counter-trivialization sentence. The participants seemed to be amused and stopped using the smartphone when they continued walking (see Figure 4(a)).
    (2)
    Another set of participants continued to walk away from the robot, even when it tried the counter-trivialization strategy. However, even though they did not look back at the robot, they stopped using the smartphone after the robot said its counter-trivialization strategy (see Figure 4(b)).
    (3)
    Some participants actively avoided the robot (e.g., walking around the robot’s operation limits), hinting that they were accustomed to it and its behavior.
    (4)
Additionally, a few participants stopped using the smartphone only within the robot’s perimeter (see Figure 4(c)). Moreover, some participants just rushed by and sped up when the robot said its counter-trivialization sentence.
    (5)
The counter-trivialization sentence also led to curiosity and laughter among bystanders who were not using a smartphone. After they heard what the robot was saying, they tried to play with it and pretended to use their smartphone while walking in front of it (see Figure 4(d)).
Fig. 4.
Fig. 4. Example observations from our field trial.

    4.5.2 Quantitative Observation.

    The quantitative observational results are depicted in a bar plot in Figure 5. In the baseline condition, the robot repeated the base admonishment sentence two times. We counted the participant reaction after hearing the sentence for the first time as the first reaction towards the robot admonishment (i.e., ignored or stopped). Consistently, we counted the reaction to the repeated utterance as the final reaction (i.e., ignored or stopped).
    Fig. 5.
Fig. 5. Results of the field experiment testing the counter-trivialization strategy. While approximately 6% fewer participants ignored the robot in the counter condition, this difference is not statistically significant. Participants’ behavior after the counter-trivialization shows a significant difference compared to the baseline condition. As a final reaction, participants more often stopped using their smartphone.
The results show that the distribution of the participants’ first reaction (i.e., after the robot first told the participants to stop using their smartphone) does not differ significantly between the baseline (stopped: n = 84; ignored: n = 76) and the counter-trivialization (stopped: n = 94; ignored: n = 73) approach, \(\chi^2(1, N = 327) = 0.82\), p = .37; see Figure 5. This result indicates that weekly variations or habituation effects did not introduce a significant difference into our results.
For the final reaction (i.e., whether participants stopped or continued using their smartphone after the repetition of the admonishment sentence or the counter-trivialization sentence), we observed in the baseline condition that n = 72 ignored the robot and n = 88 stopped using their smartphone. In the counter-trivialization condition, n = 41 participants persisted in using their smartphones and continued to ignore the robot, and n = 126 stopped using their smartphones. The final reaction shows a significant increase in participants stopping to use their smartphone, \(\chi^2(1, N = 327) = 14.22\), p \(\lt\) .001. Additionally, a test of association produced a Bayes factor of 380:1 in favor of a relationship between the condition and the outcome. This supports our hypothesis that the counter-trivialization strategy leads to more participants following the robot’s instructions and stopping to use their smartphone.
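The reported chi-square value for the final reaction can be retraced from the counts above with a few lines of Python (assuming SciPy is available); the Bayes factor requires a separate tool and is not reproduced here.

    from scipy.stats import chi2_contingency

    # Final-reaction counts; rows: baseline, counter-trivialization;
    # columns: stopped, ignored.
    final_reaction = [[88, 72],
                      [126, 41]]
    chi2, p, dof, expected = chi2_contingency(final_reaction)  # Yates-corrected by default
    print(f"chi2({dof}, N = 327) = {chi2:.2f}, p = {p:.5f}")   # 14.22, p < .001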

    4.6 Discussion

Based on our interview analysis and our pilot testing, we created an intervention strategy for the robot that targets the trivialization of the system and the interdependent cultural importance of the task. We tested the effectiveness of this approach in counteracting participants ignoring the robot. Compared to our baseline observation, the results show that significantly fewer participants ignore the robot and more stop using their smartphone. This outcome presents evidence that the robot’s anticipation of technology trivialization can lead to behavior change even in an admonishment situation. Thus, explicitly addressing the cognitive strategies that reduce dissonant feelings (here, the agent’s trivialization) can lead to behavior change.
Regarding cultural generalization, we assume that, due to the generality of CDT, counter-trivialization will likewise lead to more compliance in other cultures [10]. However, we think that the counter-trivialization sentence needs to incorporate cultural nuances. In our scenario, we concluded that the counter-trivialization should target the collective nature of Japanese society. Thus, we applied a sentence that includes the interdependent aspects of the culture. For individualistic cultures, we hypothesize that the sentences should focus on the personal consequences of not following a robot’s admonishment.
We investigated only one of several possible counter-trivialization techniques and factors influencing the ability to reduce cognitive dissonance. Additionally, we could have looked at the effects of robot morphology, such as height, which is known to influence perceived authority [32], or appearance and voice. For example, we could have investigated whether a less cute-looking robot or a more serious voice would increase the agent’s perceived authority. More anthropomorphic-looking robots could also lead to higher compliance with the robot’s instructions due to elicited agent knowledge [8]. A “robotic”-looking robot could distract people unfamiliar with robots from attentively perceiving the persuasive message because they would focus their attention on the appearance. Thus, the general appearance of the robot could influence the activation of different cognitive dissonance reduction strategies. We would hypothesize that a robot appearance perceived as more authoritative could reduce the likelihood of trivializing the robot and activate other strategies (e.g., modification, denial) or lead to an actual behavior change. Regarding human-likeness, we anticipate a challenging tradeoff between increasing compliance through higher anthropomorphism and the problem of falling into the uncanny valley [25]. In summary, we do not know which kind of appearance is more suitable for a security-guard-like task at the current research stage, and more research is needed.

    5 Overall Discussion

We presented a two-stage design process to identify why people might ignore robots in public spaces. We picked a timely and well-known problem in Japanese society, namely that the usage of smartphones while walking causes accidents, and investigated how a robot could remind people to be cautious so that this societal issue can be regulated without financial penalties. Observations from initial experiments showed that robot admonishment reduces the number of pedestrians using a smartphone. However, they also showed that many pedestrians ignore the robot. As a subsequent research step, we first conducted semi-structured interviews with participants to understand why people might ignore the robot and found that trivialization is a plausible explanation. Second, we designed a robot utterance that might persuade people to comply with the robot’s admonishment and verified it in a field experiment. Our research procedure and results yield design implications for other scenarios, limitations, ethical considerations, and future research directions, which we elaborate on in the following.

    5.1 Lessons Learned and Design Implications

Globally, many projects attempt to deploy robots in public settings such as malls, airports, or train stations to help and support people. Nevertheless, investigations and interviews show that people might not entirely understand the purpose of the robot or its functionalities and thus tend to ignore it, and perhaps also to trivialize it. Our study shows the benefit of utilizing qualitative findings to build proactive social robots that anticipate people’s cognitive processes and elicit the agent knowledge needed for the interaction. In other scenarios, we assume that a robot could also anticipate people’s doubts about the robot’s usefulness or triviality and take the initiative to clarify its purpose.
Regarding the application to different domains, we hypothesize that counter-trivialization could also be useful in health care or education scenarios. In situations where patients need rehabilitation and do not comply with the robot’s instructions, the robot could anticipate that users trivialize the agent’s capabilities and focus on the positive outcomes of following the robot’s guidance despite its non-human-like appearance and agency. The same argument holds, for example, for children and students in educational scenarios. Communicating the foreseen reasoning regarding the robot and pointing out the practical consequences of following the guidance could lead to more commitment. In other public scenarios, the robot could also utilize counter-trivialization strategies. In our previous works, we have looked into unreasonable customer complaints [26], robots being bullied by children [4], and robots distributing flyers [37]. In these situations, too, trivialization could be the reason why the robot is bullied or ignored. Here, counter-trivialization could remind people that the robot works as an equal interaction partner.

    5.2 Limitation

Our investigation has several limitations we need to consider further: the exact reasons why participants might continue to ignore the robot, hardware limitations, the issues arising when conducting field experiments over multiple weeks at the same location, and the lack of randomized controlled trials targeting different counter-trivialization strategies.
First, there is still uncertainty about the exact reasons that led participants to finally stop using their smartphones. Conducting further interviews could clarify this question. However, compared to the interviews we conducted in Section 3, this will be an even more demanding task. We anticipate that it might be even more challenging to get interviews from participants who also ignore the counter-trivialization. Due to these rare cases, we expect to collect only a few data points; however, even a few qualitative interviews could provide valuable insights.
As a hardware limitation, we found it problematic that some participants sped up when the robot approached them and ignored it. Thus, it was not possible to follow these humans in 17 cases. We counted these instances as ignoring the robot. However, we cannot be sure whether these participants heard what the robot was saying.
Regarding repeated measurements, we experienced a problem with conducting experiments at the same public space on consecutive days. People working in the shopping mall got used to the robot and knew its task and functionality. Thus, we observed that some participants stopped using the phone only within the robot’s perimeter or used a different path to avoid the robot. Since these are just qualitative observations, we need to study this phenomenon systematically. However, this results in a measurement problem, because it is difficult to automatically track or manually observe repeated encounters in such a dynamic environment.
Finally, the behavior shift could be due to sentence variation. In the baseline observation, the robot repeats the same sentence. One could speculate that sentence alteration alone already creates a less robot-like impression, which results in the observed behavior change. Perhaps merely asking pedestrians to stop ignoring the robot would lead to a comparable outcome. However, testing different counter-trivialization sentences in a field experiment is a complicated study design that requires an even higher number of participants who ignore the robot in their first reaction. Thus, we would argue for studying different counter-trivialization sentences in an experimental lab setting or an online survey.

    5.3 Ethical considerations

    Our research poses essential ethical questions that need to be considered when deploying social robots with an admonishing function in public spaces for rule enforcement. As our results indicate, the encounter with the robot can evoke presumably negative emotions (e.g., feeling ashamed or being scared). This raises the question of whether it is more important to persuade people to change their behavior by inducing discomfort, or to respect people's emotional stability and autonomy. Providing a conclusive answer is beyond the scope of this article, though we believe that implementing such robots in public scenarios requires ethical consideration and democratic consent. In our particular scenario, we expect fundamental differences between cultures, so we cannot derive a rule that fits all societies. While basic emotions are generally universal, specific feelings such as being scared or ashamed can differ across cultures in interpretation, importance, valence, and function. In Japan, the leading concepts of wa (maintaining a peaceful and harmonious society) and meiwaku (not being a nuisance to others) weigh more heavily than people's individual feelings and interests. These emotions might therefore play a vital role in making Japanese society comply with social norms and might not be perceived as inherently harmful.
    Consequently, we need further in-depth research on the perception of these emotions, on how they are perceived differently when evoked by robots versus humans, and on whether different emotions might arise in other cultures. For example, we hypothesize that people in Europe or North America might not feel shame when admonished by a robot, because shame is not the leading societal concept there that it is in Japan; there is a (controversial) distinction between shame-based and guilt-based cultures [3]. Moreover, it remains uncertain whether people felt scared or anxious because the robot admonished them or because they were caught violating a social norm or official rule. These feelings could also be evoked by a human admonisher when one is caught violating other rules, such as jaywalking or smoking where it is forbidden.
    Overall, it is also essential to mention that, even though people felt discomfort caused by the robot, they might prefer being admonished by an artificial agent rather than by a human. Previous studies on mental health, exercising, and chess-playing with artificial agents showed that people are more willing to disclose information, feel more comfortable in uncomfortable exercising situations, and perceive that an agent does not enjoy winning a game [17, 22, 36].
    Besides evoking feelings such as shame or fear, the system also interferes with people's autonomy, which is essential for motivating genuine behavior change [7]. Temporarily intimidating people into pausing a particular behavior might therefore fail to establish long-lasting effects and might even produce adverse effects when trying to change public behavior. We thus want to emphasize that this public approach is solely complementary to individual approaches that focus on helping people understand the benefits of changing their behavior (similar to anti-tobacco campaigns, which combine public advertisements, individual behavior change programs led by health care experts, and public order offices that monitor adherence to non-smoking regulations in public places). Interconnected approaches will likely be the most effective. However, even though the robot might reduce people's autonomy, it is unclear whether this would lead to less motivation and less long-term compliance in cultures like Japan: recent research could not find a link between reduced feelings of autonomy and decreased motivation among Japanese students [1, 29]. Consequently, the interplay of autonomy, culture, and motivation requires further attention, and we should also cross-culturally examine feelings of independence in public scenarios where people are controlled by robots.
    Furthermore, the deployment of robots in public is complicated because diverse people with different interests co-exist in these places. It is therefore not a trivial task to consider all stakeholders' interests, and the ethical ramifications need to be examined. The primary question here is whose interests the robot should serve. In our current application, we took the stance of law enforcement; however, it is equally necessary to identify the interests of shopping mall managers, customers, and retailers.

    5.4 Future Work

    In general, our investigations suggest that it is worth investigating more deeply the cases where participants do not follow the robot's advice, and designing interaction strategies accordingly. The majority of current articles solely examine the outcomes of two (or more) robot behaviors (e.g., appearance, feedback, adaptation) but do not look into why participants in non-compliant cases fail to follow a robot's instructions and do not act as hypothesized. We therefore propose that it is essential to study these non-conforming cases and incorporate them rigorously into the post-experiment analysis.
    Furthermore, future work should include qualitative interviews with participants who experienced the counter-trivialization sentence. These interviews should shed light on whether the counter-trivialization actually led participants to change their behavior: Were people scared of the robot because they are not familiar with robots? Because of what it said? Or did they feel intimidated by recognizing that they had violated a public rule? In terms of AI for the public good, we would also suggest running qualitative interviews with all involved stakeholders to identify how the robot could benefit everyone who uses public spaces. Additionally, we would need to investigate people's perceived autonomy and emotional valence. Besides an in-depth qualitative analysis, the research could also benefit from more quantitative measurements, e.g., hesitation, time to compliance, and non-verbal behavior; a possible operationalization of time to compliance is sketched below.
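    As a minimal, hypothetical illustration of such a measure, time to compliance could be derived from timestamped tracking events as the delay between the robot's admonishment and the moment the pedestrian puts the phone away. The event names below are invented for the sketch and do not correspond to our system's logs.

        # Hypothetical sketch: derive time-to-compliance from a timestamped event log.
        # The event names ("admonish_start", "phone_down") are invented for illustration.
        from typing import List, Optional, Tuple

        def time_to_compliance(events: List[Tuple[float, str]]) -> Optional[float]:
            """Seconds from the robot's admonishment until the phone is put away.

            Returns None if the participant never complied (e.g., kept walking).
            """
            admonish_t = next((t for t, e in events if e == "admonish_start"), None)
            comply_t = next((t for t, e in events if e == "phone_down"), None)
            if admonish_t is None or comply_t is None or comply_t < admonish_t:
                return None
            return comply_t - admonish_t

        log = [(12.0, "admonish_start"), (15.4, "phone_down")]
        print(time_to_compliance(log))  # ~3.4 s of hesitation before complying

    The same kind of log could also support a simple hesitation measure, e.g., the time a pedestrian remains stationary after the admonishment before either complying or walking off.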
    Furthermore, we need to carefully study the implications of repeated measurements in public spaces. Long-term research can show whether robotic interventions lead to lasting changes in pedestrian behavior, or instead result in people ignoring or threatening the robot when it continuously interferes with pedestrians' autonomy. Moreover, deploying robots in crowded areas requires sophisticated path-planning algorithms and interaction strategies to instruct pedestrians not to stand in the robot's way. Automating admonishing behavior thus calls for research on navigation algorithms for populated spaces. Currently, only a few scenarios and environments permit the deployment of such a robot for this task, for example, shopping malls or intersections in front of traffic lights. In more crowded environments, it remains challenging for the robot to approach the norm-violating person quickly and precisely enough that the admonishment is delivered near that person without confusing other pedestrians (see the sketch below).
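    To make the approach problem concrete, the toy sketch below shows one greedy heuristic under strong simplifying assumptions (a static 2D snapshot, no walls, no motion prediction); it is not the navigation stack used in our system. Among candidate points at speaking distance around the target, the robot picks the one with the largest clearance from all other pedestrians.

        # Toy sketch: choose an approach point at speaking distance from the target
        # pedestrian that maximizes clearance from all other pedestrians.
        # Strongly simplified: static 2D snapshot, no walls, no motion prediction.
        import math

        def approach_point(target, others, speak_dist=1.2, n_candidates=16):
            """Return the candidate point around the target with the most clearance."""
            candidates = []
            for i in range(n_candidates):
                a = 2 * math.pi * i / n_candidates
                point = (target[0] + speak_dist * math.cos(a),
                         target[1] + speak_dist * math.sin(a))
                # Clearance = distance to the nearest non-target pedestrian.
                clearance = min((math.dist(point, p) for p in others),
                                default=float("inf"))
                candidates.append((clearance, point))
            return max(candidates, key=lambda c: c[0])[1]

        target = (5.0, 5.0)                   # the admonished pedestrian
        others = [(4.0, 5.5), (6.5, 4.8)]     # bystanders to keep clear of
        print(approach_point(target, others))

    A deployable system would additionally need pedestrian motion prediction, socially aware path costs, and recovery behaviors for blocked paths, which is precisely the research gap described above.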
    Finally, since we targeted a low-risk scenario, we suggest also investigating scenarios with more conflict potential. We imagine that robots could be used in more dangerous situations to admonish immoral or forbidden behaviors. For example, robots could be deployed at concerts or sports stadiums to admonish people who carry forbidden objects (e.g., alcohol, weapons, fireworks), display forbidden symbols (e.g., swastikas), or use xenophobic slogans. We anticipate that these situations would be harder for a robot to manage and could result in severe conflicts.

    5.5 Conclusion

    In our work, we highlighted the importance of counter-acting participants' ignoring of a robot's admonishing instructions in public spaces. To this end, we conducted a qualitative interview study to find evidence of why people ignore robots. Our results show that participants trivialize the robot because of its technological immaturity. We found a characteristic answer pattern that distinguishes participants who disregard the robot from those who follow its instructions. In both cases, participants agree that public rules are essential, but also state that such rules can be disregarded for important reasons. The principal finding, however, is that participants who ignore the robot suppress their feelings and trivialize the robot to reduce the cognitive dissonance that arises when their behavior does not match their beliefs. Thus, instead of trying to convince people that robots are mature and capable of handling things, robots can explain that they anticipate what people think about them, but that this does not justify trivialization when they have a critical task to carry out. We propose that these are first steps towards the adoption of robot technology in a hybrid society, which can only be achieved when all interaction partners communicate their abilities and limitations and agree on a common ground of mutual respect.
    In our second step, we investigated the usefulness of expressing the assumption that people who ignore the robot are trivializing it, and of communicating the common ground that it is working for everyone's safety. Our field investigation shows that this counter-trivialization leads to more people following the robot's commands, and also to increased curiosity among bystanders. Still, this is only a first step in the investigation of robot trivialization and the effectiveness of explainable counter-trivialization methods.

    Footnotes

    1
    We would like to mention that admonishment implies that someone with more power warns others. However, we cannot guarantee that robots can actually hold a social role that includes such power; one could also say that robots cautiously remind people to obey rules. Nevertheless, throughout this manuscript we use admonish rather than remind.
    2
    For consistency in the article's nomenclature, we define participants as people who use their smartphones while walking among the other pedestrians in a shopping mall.
    3
    Admonishing people not to use their smartphones while walking may seem like a trivial task, but it is a widely acknowledged problem in Japanese society and causes accidents. See, for example, https://www.bbc.com/worklife/article/20200810-yamato-japan-smartphone-ban-while-walking.

    References

    [1]
    Toshie Agawa and Osamu Takeuchi. 2016. Validating self-determination theory in the Japanese EFL context: Relationship between innate needs and motivation. Asian EFL J. 18, 8 (2016), 7–33.
    [2]
    Neziha Akalin, Annica Kristoffersson, and Amy Loutfi. 2019. The influence of feedback type in robot-assisted training. Multim. Technol. Interact. 3, 4 (2019), 67.
    [3]
    Ruth Benedict. 2005. The Chrysanthemum and the Sword: Patterns of Japanese Culture. Houghton Mifflin Harcourt.
    [4]
    Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda. 2015. Escaping from children’s abuse of social robots. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-robot Interaction. 59–66.
    [5]
    Mark Coeckelbergh, Cristina Pop, Ramona Simut, Andreea Peca, Sebastian Pintea, Daniel David, and Bram Vanderborght. 2016. A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: Ethical acceptability, trust, sociability, appearance, and attachment. Sci. Eng. Ethics 22, 1 (2016), 47–65.
    [6]
    Malene F. Damholdt, Marco Nørskov, Ryuji Yamazaki, Raul Hakli, Catharina Vesterager Hansen, Christina Vestergaard, and Johanna Seibt. 2015. Attitudinal change in elderly citizens toward social robots: The role of personality traits and beliefs about robot functionality. Front. Psychol. 6 (2015), 1701.
    [7]
    Edward L. Deci and Richard M. Ryan. 1987. The support of autonomy and the control of behavior. J. Personal. Soc. Psychol. 53, 6 (1987), 1024.
    [8]
    Nicholas Epley, Adam Waytz, and John T. Cacioppo. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 114, 4 (2007), 864.
    [9]
    Leon Festinger. 1962. A Theory of Cognitive Dissonance. Vol. 2. Stanford University Press.
    [10]
    Bertram Gawronski, Kurt Peters, and Fritz Strack. 2008. Cross-cultural differences vs. universality in cognitive dissonance: A conceptual reanalysis. In Handbook of Motivation and Cognition Across Cultures. 297–314.
    [11]
    Aimi Shazwani Ghazali, Jaap Ham, Emilia Barakova, and Panos Markopoulos. 2018. The influence of social cues in persuasive social robots on psychological reactance and compliance. Comput. Hum. Behav. 87 (2018), 58–65.
    [12]
    Jaap Ham and Cees Midden. 2009. A robot that says “bad!”: Using negative and positive social feedback from a robotic agent to save energy. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI’09). 265–266.
    [13]
    Steven J. Heine, Darrin R. Lehman, Hazel Rose Markus, and Shinobu Kitayama. 1999. Is there a universal need for positive self-regard? Psychol. Rev. 106, 4 (1999), 766.
    [14]
    Etsuko Hoshino-Browne, Adam S. Zanna, Steven J. Spencer, Mark P. Zanna, Shinobu Kitayama, and Sandra Lackenbauer. 2005. On the cultural guises of cognitive dissonance: The case of Easterners and Westerners. J. Personal. Soc. Psychol. 89, 3 (2005), 294.
    [15]
    Elizabeth E. Joh. 2016. Policing police robots. UCLA L. Rev. Disc. 64 (2016), 516.
    [16]
    Takayuki Kanda, Masahiro Shiomi, Zenta Miyashita, Hiroshi Ishiguro, and Norihiro Hagita. 2009. An affective guide robot in a shopping mall. In Proceedings of the 4th ACM/IEEE International Conference on Human-robot Interaction. ACM, 173–180.
    [17]
    Garry Kasparov. 2017. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Hachette UK.
    [18]
    Min Kyung Lee, Sara Kiesler, and Jodi Forlizzi. 2011. Mining behavioral economics to design persuasive technology for healthy choices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 325–334.
    [19]
    Daniel T. Levin, Caroline Harriott, Natalie A. Paul, Tao Zhang, and Julie A. Adams. 2013. Cognitive dissonance as a measure of reactions to human-robot interaction. J. Hum.-robot Interact. 2, 3 (Sept. 2013), 3–17. DOI: https://doi.org/10.5898/JHRI.2.3.Levin
    [20]
    Dan Leyzberg, Eleanor Avrunin, Jenny Liu, and Brian Scassellati. 2011. Robots that express emotion elicit better human teaching. In Proceedings of the 6th International Conference on Human-robot Interaction (HRI’11). ACM, New York, NY, 347–354. DOI: https://doi.org/10.1145/1957656.1957789
    [21]
    Rosemarijn Looije, Mark A. Neerincx, and Fokie Cnossen. 2010. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. Int. J. Hum.-comput. Stud. 68, 6 (2010), 386–397.
    [22]
    Gale M. Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It’s only a computer: Virtual humans increase willingness to disclose. Comput. Hum. Behav. 37 (2014), 94–100.
    [23]
    Hazel R. Markus and Shinobu Kitayama. 1991. Culture and the self: Implications for cognition, emotion, and motivation. Psychol. Rev. 98, 2 (1991), 224.
    [24]
    Kazuki Mizumaru, Satoru Satake, Takayuki Kanda, and Tetsuo Ono. 2019. Stop doing it! Approaching strategy for a robot to admonish pedestrians. In Proceedings of the 14th ACM/IEEE International Conference on Human-robot Interaction (HRI’19). IEEE, 449–457.
    [25]
    Masahiro Mori, Karl F. MacDorman, and Norri Kageki. 2012. The uncanny valley [from the field]. IEEE Robot. Automat. Mag. 19, 2 (2012), 98–100.
    [26]
    Daichi Morimoto, Jani Even, and Takayuki Kanda. 2020. Can a robot handle customers with unreasonable complaints? In Proceedings of the ACM/IEEE International Conference on Human-robot Interaction. 579–587.
    [27]
    Omar Mubin, Catherine J. Stevens, Suleman Shahid, Abdullah Al Mahmud, and Jian-Jie Dong. 2013. A review of the applicability of robots in education. J. Technol. Educ. Learn. 1, 209-0015 (2013), 13.
    [28]
    Elvira Nica. 2018. Will robots take the jobs of human workers? Disruptive technologies that may bring about jobless growth and enduring mass unemployment. Psychosociol. Issues Hum. Resour. Manag. 6, 2 (2018), 56–61.
    [29]
    Takuma Nishimura and Shigeo Sakurai. 2017. Longitudinal changes in academic motivation in Japan: Self-determination theory and East Asian cultures. J. Appl. Devel. Psychol. 48 (2017), 42–48.
    [30]
    Eunil Park, Ki Joon Kim, and Angel P. del Pobil. 2011. The effects of a robot instructor’s positive vs. negative feedbacks on attraction and acceptance towards the robot in classroom. In Social Robotics, Bilge Mutlu, Christoph Bartneck, Jaap Ham, Vanessa Evers, and Takayuki Kanda (Eds.). Springer Berlin, 135–141.
    [31]
    Irene Rae, Leila Takayama, and Bilge Mutlu. 2013. The influence of height in robot-mediated communication. In Proceedings of the 8th ACM/IEEE International Conference on Human-robot Interaction (HRI’13). IEEE Press, Piscataway, NJ. Retrieved from http://dl.acm.org/citation.cfm?id=2447556.2447558.
    [32]
    Irene Rae, Leila Takayama, and Bilge Mutlu. 2013. The influence of height in robot-mediated communication. In Proceedings of the 8th ACM/IEEE International Conference on Human-robot Interaction (HRI’13). IEEE, 1–8.
    [33]
    Maike A. J. Roubroeks, Jaap R. C. Ham, and Cees J. H. Midden. 2010. The dominant robot: Threatening robots cause psychological reactance, especially when they have incongruent goals. In Proceedings of the International Conference on Persuasive Technology. Springer, 174–184.
    [34]
    Sebastian Schneider and Franz Kummert. 2016. Exercising with a humanoid companion is more effective than exercising alone. In Proceedings of the IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). IEEE, 495–501.
    [35]
    Sebastian Schneider and Franz Kummert. 2016. Motivational effects of acknowledging feedback from a socially assistive robot. In Proceedings of the International Conference on Social Robotics. Springer, 870–879.
    [36]
    Sebastian Schneider and Franz Kummert. 2021. Comparing robot and human guided personalization: Adaptive exercise robots are perceived as more competent and trustworthy. Int. J. Soc. Robot. 13, 2 (2021), 169–185.
    [37]
    Chao Shi, Satoru Satake, Takayuki Kanda, and Hiroshi Ishiguro. 2018. A robot that distributes flyers to pedestrians in a shopping mall. Int. J. Soc. Robot. 10, 4 (2018), 421–437.
    [38]
    Megan Strait, Cody Canning, and Matthias Scheutz. 2014. Let me tell you! Investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance. In Proceedings of the ACM/IEEE International Conference on Human-robot Interaction. 479–486.
    [39]
    World Economic Forum (WEF) (Ed.). 2018. The future of jobs report 2018. Retrieved from http://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf.
    [40]
    J. Xu, J. Broekens, K. Hindriks, and M. A. Neerincx. 2015. Effects of a robotic storyteller’s moody gestures on storytelling perception. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII). 449–455. DOI: https://doi.org/10.1109/ACII.2015.7344609
    [41]
    Sangseok You, Jiaqi Nie, Kiseul Suh, and S. Shyam Sundar. 2011. When the robot criticizes you...: Self-serving bias in human-robot interaction. In Proceedings of the 6th International Conference on Human-Robot Interaction (HRI’11). Association for Computing Machinery, New York, NY, 295–296. DOI: https://doi.org/10.1145/1957656.1957778
    [42]
    Khaoula Youssef, Jaap Ham, and Michio Okada. 2016. Investigating the differences in effects of the persuasive message’s timing during science learning to overcome the cognitive dissonance. In Proceedings of the International Conference on Social Robotics. Springer, 104–114.


