3.6.1 Thrust Area 1: The Human in Human–Security Robot Interaction.
In this review, 38.3% of studies examined human factors, including demographics (e.g., gender and age), attitudes, mental models, personality, human obedience behavior, and guilt status. These studies focused on perceptions of robots (8 studies), perceptional robot acceptance (11 studies), and behavioral robot acceptance (2 studies).
Perception of Robots. Eight studies investigated the influence of human factors on perceptions of robots. Four examined the influence of gender on attitudes and images of security robots, finding mixed results [
26,
40,
41,
103]. Three studies examined participants’ gender and did not find a significant effect of human gender on perceived threat [
40,
41], fairness [
40,
41], functionality [
40,
41], correctness [
40,
41], or affective evaluations [
26] of security robots. However, one study found that women perceived the adoption of domestic robots, which perform household chores and personal security tasks, as a much riskier proposition than men did [
103].
Two studies examined age, both finding non-significant results [
26,
40]. One study examined the effect of age and did not find any significant effect on the security robots’ image, including perceived threat, fairness, functionality, or correctness [
40]. Another study compared people’s positive evaluations of security robots between age groups younger than 25 and those older than 25 and again did not find significant differences [
26].
Two studies investigated participants’ perceptions of security robots and their compliance with the robot [
4,
5]. In one study, participants who complied with the robot’s directives perceived it as safer and somewhat less assertive when compared to non-compliant participants [
4]. Additionally, the same study found that obedient individuals attributed higher anthropomorphic qualities to the security robot and rated it as slightly less aggressive, although this difference was not significant. A similar study examined these relationships again and found that individuals obedient to the security robot rated it as less aggressive and safer, with higher perceived intelligence, higher anthropomorphism, a higher sense of responsibility, and more comfort compared to those who disobeyed the robot's instructions [
5]. Although all these trends were non-significant, they aligned with previous findings.
Only one study investigated the influence of individuals’ technical background on their perception of security robots [
26]. This research uncovered that individuals with technical expertise hold more favorable opinions of security robots than those without.
Mental models were only examined in one study [
70]. Participants with the security guard task-role mental model interpreted the intent of the robot security guard's behaviors more accurately than participants with no such prescribed mental model. However, participants with the security mental model perceived non-security behavior less accurately, which indicates the importance of matching mental models. The correct mental model was also positively associated with participants' perceived robot intelligence and safety.
The impact of the guilt status of human suspects on people’s perceptions of robots was also examined [
46]. Study participants were separated into two cohorts: one comprising individuals undergoing robot-led interrogations and the other comprising observers witnessing these interactions. Participants in both roles evaluated the robot's characteristics. Subjects designated as “suspects” were further categorized as “guilty” or “innocent” and then interacted with the robotic interrogators. The results indicated that suspects' perceptions of the robot's exertion, pressure, anxiety induction, and friendliness did not differ by their assigned guilt status. However, observers felt that robots appeared to apply more effort and pressure during interrogations when suspects were presumed guilty rather than innocent.
Key takeaways:
(1)
Although gender does not significantly affect overall perceptions of security robots, women view the adoption of domestic robots as riskier than men do, indicating complex gender attitudes toward different robot types.
(2)
Age does not seem to explain perceptions of security robots, indicating that other factors may be more influential in shaping attitudes toward these technologies.
(3)
Compliant individuals viewed security robots as safer and less aggressive and associated them with higher perceived intelligence and comfort, though these findings were not statistically significant.
(4)
Users with a pre-existing security guard mental model interpret security robot behaviors more accurately and perceive the robots as more intelligent and safer.
Robot Acceptance. Thirteen studies examined the impact of human-related factors on robot acceptance, with two measuring behavioral robot acceptance and 11 measuring perceptional robot acceptance.
1. Behavioral Acceptance. Two studies in thrust area 1 examined behavioral robot acceptance. Human obedience behavior toward security robots was examined through interviews aimed at understanding causes of non-compliance [
81]. These researchers found that cognitive dissonance reduction explained why many individuals ignored the warnings issued by the robots. Many participants justified their non-compliance by downplaying the importance of the robot's instructions, considering them trivial. Others adjusted their reasoning, either emphasizing the significance of their own actions or attributing their non-compliance to the fact that few people were around. The influence of a suspect's presumed guilt status on denial behavior was also examined [
46]. The study found that the guilt status of the suspects did not significantly affect their self-evaluation of denial behavior, including anxiety, defensiveness, friendliness, or forcefulness. Additionally, observers' judgments of guilt were unaffected by the suspects' actual guilt or innocence. Further, there was no notable difference in observers' assessments of suspects' denial behaviors in terms of defensiveness or anxiety, though observers noted that suspects tended to deny more vehemently when they were innocent rather than guilty.
Key takeaways:
(1)
Cognitive dissonance reduction helps explain non-compliance with security robots' instructions: users often trivialize warnings or otherwise justify ignoring the robots' directives.
(2)
The guilt status of suspects does not reliably predict their behavior toward security robots or observers' judgments of their interactions.
2. Perceptional Acceptance. Eleven studies explored the impact of human factors on perceptional acceptance. Six studies examined the effect of gender on acceptance, with three finding evidence that women prefer and trust security robots more than men [
14,
26,
34,
63,
78,
111]. One study found that men and women differed significantly in their reliance intentions toward and perceived trustworthiness of security robots [
34]. Women were found to have higher reliance intentions on robots and higher perceived trustworthiness than men. Additionally, women were more likely to use security robots in hospitals and on college campuses than men. However, no differences were found between men's and women's intentions to use security robots in other public settings or in military settings. Another study investigated home service robots and found that women participants emphasized security and safety functions more than men did [
111]. A separate study utilized interviews to explore public opinions regarding security robots [
63]. This study found that women are more inclined to accept these robots, potentially because they feel safer with them than with men. Additionally, women perceived security robots as posing a lower risk of violence than human security personnel.
However, three studies found no difference between male and female participants regarding their acceptance of security robots. One study did not find a significant result on either trust or perceived competency [
14]. Similarly, another study found that gender did not significantly impact the perceived occupational competency of or trust in the security robots [
78]. Another study investigated the impact of gender on people's expectations of when security robots and related scenarios would become a reality [
26]; no significant differences were observed between male and female participants' expectations of this time frame.
The influence of human age on acceptance was investigated in two studies, yielding conflicting results [
26,
83]. One study revealed that confidence in security robot capabilities varies across human age groups [
83]. Specifically, awareness of technology-enabled home safety control is generally higher among adults, while it tends to be lower in younger and older demographics. Notably, a quarter of elderly participants perceived home safety control as an unattainable task for robots. Another study compared expectations about the future deployment timeline of security robots between individuals younger and older than 25 and found nonsignificant results [
26].
One study examined the impact of individuals’ technical backgrounds on their expectations and time frame for implementing security robots in the real world [
26]. This research revealed that, compared to non-technical individuals, those with a technical background are more optimistic, believing that the deployment of security robots will occur in a shorter time frame.
Two studies explored the relationship between attitude toward robots and trust [
50,
64]. One study found that attitudes were significantly associated with trust [
64]. These findings align with another study, which demonstrated a notable positive correlation between attitude and trustworthiness [
50]. Also, trustworthiness (ability, benevolence) was found to have a significant effect on trust, indicating that trustworthiness would affect the trust model [
50]. This study also examined the relationships between human likeness and trustworthiness and between positive affect and trustworthiness, finding no significant results for either.
Only one study examined the influence of personality and
perfect automation schema (PAS) and found they are significant predictors of trust and use intentions [
61]. Personality was assessed along five dimensions: agreeableness, conscientiousness, extraversion, neuroticism, and intellect/imagination. PAS captures individuals' attitudes and expectations toward advanced technology, encompassing two dimensions: high expectations and all-or-none beliefs. High expectations reflect the belief that technology will perform at a high level, while all-or-none beliefs indicate that individuals view technology as either perfectly functioning or completely broken. The results showed that agreeableness, intellect/imagination, high expectations, and all-or-none beliefs significantly correlate with trust. Extraversion, agreeableness, intellect/imagination, and high expectations had significant correlations with public and military use intentions. Agreeableness and high expectations were associated with higher trust and a greater desire to use the robots in public and military settings. The trait intellect/imagination was associated with lower trust and desire to use. Extraversion was associated with a greater desire to use, and all-or-none beliefs were associated with lower trust [
61].
Two other studies looked at different variables [
70,
78]. One of these examined the impact of mental models and found a significant positive association between having the correct mental model and trustworthiness, while no significant association was observed with robot power [
Another study examined the influence of education level and of people's comfort with interacting with new robots [
78]. This study found that participants with higher comfort levels reported greater trust, higher perceived occupational competency of security robots, and a stronger preference for security robots over male or female human agents. Education level also significantly affected people's trust in security robots and their degree of preference for security robots over human female agents, although the direction of this impact was not reported [
70]. However, this study did not find an impact of education levels on people’s perception of occupational competency or their degree of preference for robots over human male agents.
Key takeaways:
(1)
Mixed results were found regarding the impact of gender on security robot acceptance, though the significant findings consistently suggest that women show greater trust in and intention to use security robots than men.
(2)
The impact of age on acceptance is unclear, with the literature finding mixed results.
(3)
Familiarity with technology is associated with more optimistic expectations about deploying security robots.
(4)
Specific personality traits, particularly agreeableness and intellect/imagination, significantly predict trust and intentions to use security robots.
3.6.2 Thrust Area 2: The Robot in Human–Security Robot Interaction.
In this review, 55.3% of studies examined robot factors, which include the robots’ gender, physical and non-physical design, physical and non-physical behavior, reliability, and presence. They focused on perceptional robot acceptance (17 studies), behavioral robot acceptance (5 studies), perceptions of robots (9 studies), and user performance (1 study).
Human Performance. One paper investigated the influence of robot factors on user performance. This study assessed police officers’ use of security drones for operational tasks and found that the use of drones decreased completion time and reduced the number of targets overlooked compared to not using drones [
86]. The study assessed situational awareness among participants and identified an enhancement in information quality with drone deployment while the overall mental workload remained consistent. Notably, temporal demand decreased with drone use. This study also contrasted the cognitive demands of operating a single drone on one monitor versus controlling a swarm of drones on single or multiple monitors. The findings indicated an increased mental workload when managing multiple drones, irrespective of the monitor setup [
86]. Single-monitor configurations for multiple drones led to perceptions of greater time pressure than using multiple monitors. Despite no significant variations in participants’ stress or insecurity levels, the complexity was perceived to be higher when supervising a multi-monitor drone swarm compared to a single drone operation.
Key takeaways:
(1)
The use of security drones improves performance and information quality without increasing the overall mental workload.
(2)
Managing a swarm of security drones, particularly on a single monitor, increases perceived complexity and time pressure compared to operating a single drone.
Perception of Robots. Nine studies investigated the influence of robot-related factors on human perception of the robot. Four papers [
54–
56,
108] examined robot appearance, with three specifically investigating the impact of anthropomorphic design [
54,
55,
108], yielding consistent results for likability but conflicting results for safety. One study manipulated different robot types (anthropomorphic, zoomorphic, machine-like) to examine the interaction between a robot's appearance and task, finding no significant differences in perceived likability [
54]. Another study examined the direct influence of anthropomorphism on the perceived likability, intelligence, and safety of security robots and whether this relationship was moderated by the interaction scenario. However, all relationships were found to be non-significant [
108]. In contrast, another study explored the impact of anthropomorphism on home security robots, discovering that robots with humanlike features were rated as significantly less physically safe and of lower quality compared to robots without humanlike features [
55].
Another study examined the impact of adding a university logo or flashing lights on a COVID-19 security robot [
56]. The university logo or flashing lights increased perceptions of the robot’s authoritativeness, with the logo perceived as more authoritative than the flashing lights [
56]. However, the security robot design that did not include a logo or flashing lights was perceived as friendlier and less aggressive. No differences were found in perceptions of the robot's innovativeness, inviting nature, reliability, professionalism, or elegance between robots with and without the logo and flashing lights.
The impact of the politeness of robots was examined in two studies, both reporting significant effects [
40,
41]. Polite security robots were perceived by humans as friendlier, fairer, and displaying more appropriate behavior [
41]. Conversely, impolite security robots were perceived as more threatening and unfair, with lower perceived functionality and correctness compared to polite counterparts [
40].
The gender, personality, behavior, and interrogation approaches of robots have also been individually examined. Security robots gendered as male received higher affective evaluations, attitudes (marginally), cognitive evaluations (marginally), and subjective norms than female-gendered security robots [
88]. Introverted security robots received more positive affective and cognitive evaluations and greater subjective norms (marginally) than extraverted robots [
88]. Security robots that used body movements to convey messages were seen as more aggressive than robots that carried a signboard to convey messages [
4]. Finally, one study [
46] investigated suspects' and observers' perceptions of interrogator robots employing either innocent-presumptive or guilt-presumptive interrogation approaches. The results indicated that human suspects did not perceive the robots using different interrogation approaches as exerting varying levels of effort to elicit confessions, nor did they perceive any differences in friendliness. However, there was a marginally significant difference in the perceived pressure exerted by the robot [
46]. Additionally, there was a significant difference in the perceived anxiety of the robot, with robots using the innocent-presumptive approach appearing less anxious. As for the observers' perceptions, their predictions of the robot's judgment depended on the interrogation approach. They perceived that the robot tried harder and applied more pressure to obtain a confession under guilt-presumptive assumptions.
Key takeaways:
(1)
Anthropomorphic security robots were rated as less safe than non-anthropomorphic designs in one study, while likability did not differ across studies.
(2)
Design features can influence perceptions of a robot’s authority and approachability but do not affect perceptions of innovativeness, reliability, or professionalism.
(3)
Robot politeness strongly influences people’s perceptions of security robots, with polite security robots being viewed more favorably than impolite security robots.
(4)
Male-gendered security robots receive higher affective evaluations than female-gendered ones. Additionally, introverted robots are perceived more positively than extraverted ones.
Robot Acceptance. Twenty-one studies investigated the influence of robot-related factors on humans’ acceptance of robots, with five measuring behavior and 17 measuring perception.
1. Behavioral Acceptance. Five papers investigated the influence of robot factors on behavioral robot acceptance [
12,
46,
54,
56,
67]. A security robot’s design features were examined in two studies [
54,
56]: one concentrated on anthropomorphic design [
54], while the other examined the impact of flashing lights [
56]. The robot’s appearance (anthropomorphic, zoomorphic, or mechanical) along with its assigned role (security or tour guide) did not significantly influence participants’ engagement or active response to the robots [
54]. However, red and blue flashing lights did increase participants' engagement, although they did not influence participants' tolerance for completing the interaction session [
56]. A robot’s presence also increased compliance among passersby, who were more inclined to wear their masks as requested when interacting with or noticing the robot [
56].
Three studies investigated the impact of a security robot's behavior on human behavior. One study revealed that admonishing pedestrians significantly increased the success rate of halting inappropriate behaviors, such as phone use while walking, compared to a friendlier approach [
67]. However, it was noted that admonishing pedestrians was not effective in persuading them about the wrongness of their actions; instead, they stopped primarily due to the surprise of the initial encounter. Another study examined the impact of arming the security robots with a lethal or non-lethal weapon [
12]. The findings revealed that participants were more inclined to comply with robotic peacekeepers when they were equipped with lethal backup weapons compared to non-lethal ones. Participants also decided to comply with a robot more slowly when it was guarding a checkpoint equipped with a lethal weapon but made the decision most rapidly when it was guarding a checkpoint without a lethal weapon. One study also investigated the impact of robot interrogation approaches [
46]. In this study, researchers recruited two groups of participants; one group was assigned the role of human suspects interacting with an interrogator robot, and the other served as observers watching the interaction. The robot used an interrogation approach that was either innocent-presumptive or guilt-presumptive. The innocent-presumptive approach involved the robot asking questions as if the suspect were innocent and did not commit the crime. In contrast, the guilt-presumptive approach involved asking questions assuming the suspect was guilty. The results indicated that human suspects tended to be friendlier toward robot interrogators when the approach was innocent-presumptive rather than guilt-presumptive. Also, human observers rated the suspects' denial behavior as more defensive and more vehement in their denials when faced with guilt-presumptive interrogators, but no differences were observed in terms of anxiety. However, human suspects believed their denial behavior did not differ in terms of anxiety, defensiveness, or forcefulness. The observers' judgments of suspects' guilt were also found to be independent of the robot interrogator's approach.
Key takeaways:
(1)
Visual design features such as flashing lights can increase engagement with security robots.
(2)
The presence of a security robot can increase compliance.
(3)
Admonishing behavior from security robots increases immediate compliance, although largely due to surprise rather than persuasion.
(4)
People are more inclined to comply with security robots equipped with lethal weapons than non-lethal weapons.
(5)
Suspects are friendlier toward security robots using an innocent-presumptive interrogation approach as opposed to a guilt-presumptive interrogation approach.
2. Perceptional Acceptance. Seventeen studies investigated various factors related to robots and their impact on perceptional acceptance. The influence of gendering security robots on their acceptance was investigated in four studies [
14,
78,
88,
89]. Half of these studies reported a preference for male security robots over female ones, while the remaining studies found no discernible differences in acceptance based on gender. In the studies that found differences, male-gendered security robots, manipulated through voice and names, were seen as more useful [
89] and marginally easier to use [
89], and they had greater perceived behavioral control (marginally) [
89] than female-gendered security robots, explaining why participants also reported higher intention to use male security robots over female security robots [
88,
89]. In contrast, two other studies indicated that the robot’s gender had no effect on various measures of trust and preference compared to human security personnel [
14,
78].
Three studies examined the impact of robot anthropomorphic design, with one supporting the claim that anthropomorphic design enhances security robot acceptance [
108], while the other two did not [
35,
54]. The anthropomorphic appearance of a security robot was positively linked to trustworthiness and intention to use [
108]. However, contrasting findings suggest that the anthropomorphic appearance has no significant impact on trust [
54,
108] or satisfaction [
54] and is negatively associated with preference for the robot [
35]. This discrepancy in findings may be attributed to potential moderators. For instance, the significance of a robot’s anthropomorphic appearance was found to vary depending on the social demand of the role. Humans tended to prefer more human-like robots for socially demanding tasks, while they favored machine-like robots for less socially demanding tasks [
35]. The discrepancy in findings may also be attributed to variations in how each study operationalized humanlike and machine-like appearance. Besides these studies, one study [
13] investigated law enforcement officers' acceptance of a communicative security robot, finding a high level of trust among the officers. They interviewed participants to identify design factors contributing to trust and found that the robot's anthropomorphic appearance was unimportant for nearly half of the officers. Instead, participants highlighted factors such as size, voice, volume, emotion expression capability, battery life, and camera views as important considerations for reliability.
Two studies investigated whether the type of threat detection employed by a security robot influenced whether its recommendation was accepted, finding conflicting results [
57,
64]. The two types of threat analyses examined were physical-based analysis and psychology-based analysis. The physical-based analysis involves the robot discerning threats through direct physical cues, like detecting chemicals or identifying weapon-shaped objects in X-ray images. Conversely, psychology-based analysis entails the robot interpreting human intentions, utilizing information such as facial expressions and eye movements. One study used text-based scenarios and observed a significant impact on trust; participants exhibited higher confidence in the robot's physical-based analysis and were more inclined to follow its recommendations than those from the psychology-based analysis [
64]. Conversely, the other study used a virtual environment with the same two types of threat detection but found no significant main effect on trust [
57].
Two studies examined the impact of a robot's expressed social intent on both trustworthiness and trust, yielding consistent results for trustworthiness but mixed findings for trust [
60,
62]. In both studies, the security robot informed participants that its social intent was to protect the human visitors, to protect the building occupants, to be self-protective, or to be self-sacrificial. Self-protective means that it would prioritize protecting the robot itself; protecting visitors means maximizing protection and well-being for visitors; protecting occupants means maximizing protection for personnel within the secure area; and protecting visitors with self-sacrifice means prioritizing the safety of the visitors over the robot, even if the robot were to be destroyed. One study found that the social intent of a security robot had a significant impact on people’s perceived integrity and benevolence but not on trust or perceived ability [
62]. The other study found a significant impact on perceived integrity, benevolence, and trust but not on perceived ability or the desire to use [
60]. The condition of robot self-sacrifice was found to be associated with higher perceived integrity and benevolence in both of these studies [
60,
62] and higher trust in one study [
60]. An interaction between robot autonomy and stated social intent was also observed [
60]. When the robot was intended to protect occupants, participants' perceived ability and integrity were higher when it had low autonomy than when it had high autonomy. However, when the robot's intent was self-sacrifice, participants' perceived ability and integrity were higher when the robot had high autonomy than when it had low autonomy.
The impact of security robot autonomy was examined in two studies: a quantitative study finding non-significant effects [
60] and a qualitative study highlighting serious concerns with autonomy [
63]. In the quantitative study, the degree of autonomy did not impact the participants’ trust, trustworthiness, or intention to use [
60]. However, the other study was conducted with semi-structured interviews to understand public perceptions of security robots, and the robots’ level of autonomy was identified as a major concern [
63]. Participants expressed concerns about the possibility of security robots being hacked or hijacked.
Beyond that, robots' presence, reliability, personality, directive communication style, defensive behavior, and weapon utilization were all examined in different studies. A semi-structured interview study of the public's opinions of security robots discovered that participants believed one significant benefit of these robots was their ability to deter crime merely through their presence [
63]. Another study examined the effect of security robots' reliability and found that it significantly influences trust and trustworthiness [
62]. Specifically, when the security robot accurately denied access to unauthorized individuals, participants trusted it more and thought it had higher ability and integrity. Another study examined the effect of security robots' personalities on acceptance and found that introverted security robots elicited higher perceived trust, perceived behavioral control, and acceptance than extraverted robots [
88].
The impacts of communication style and trust elements were explored in peacekeeping robots [
58]. Trust elements include emotion, behavior, and cognition. Participants showed higher trust in robots using the analytic directive style than in those using the comparative style. Participants also reported higher trust for emotion-based appeals than for behavior-based and cognition-based appeals.
The impact of a security robot's defensive behavior on human acceptance was also examined [
24]. This study found that the method of defense significantly influences acceptability, with less forceful approaches being preferable. Specifically, blocking was perceived as more acceptable than non-lethal force, and both were deemed more acceptable than lethal force. Additionally, the use of non-lethal defense was considered more acceptable in response to lethal attacks than to non-lethal attacks. Law enforcement officers’ interaction with drone swarms was also examined [
86]. This study found no significant difference in officers’ trust when comparing the use of a single monitor for one drone to that of multiple monitors or a single monitor controlling a swarm of drones. Finally, a qualitative study examined the acceptance of weapon utilization [
87]. These researchers asked participants to envision their ideal domestic robots. The results revealed that one of the popular robots was a house-sitting robot capable of overseeing a home to maintain order and security. Participants imagined a robot that could monitor their physical property and patrol both inside and outside the house. They envisioned a systematic collaboration between mobile robots and in-house surveillance systems as the preferred operational mode. Interestingly, six households specified wanting a security robot but without the risk of it being armed. They preferred features like a loud alarm or the ability to contact security agents. In contrast, one participant desired a robot that normally roamed the house for cleaning but could deploy a weapon if the security sensors detected abnormal situations [
87].
Key takeaways:
(1)
The impact of security robot gender was mixed: some studies found no significant differences, while others indicated that male robots are viewed more favorably.
(2)
Preference for human-like robots is higher for socially demanding tasks, suggesting that the role and social context are crucial in determining the effectiveness of anthropomorphic design.
(3)
The expressed social intent of security robots significantly influences perceptions of integrity and benevolence.
(4)
Autonomy in security robots may not reduce trust outright, but it raises significant safety and ethical concerns that must be addressed.
(5)
People show higher trust in highly reliable security robots than in less reliable ones, trust introverted security robots more than extraverted ones, and prefer robots that use less forceful defense approaches.
3.6.3 Thrust Area 3: The Contextual Factors in Human–Security Robot Interaction.
In this review, 38.3% of studies directly examined interaction and contextual factors, including robot tasks, security agent types, interaction contexts and usage, and cultural backgrounds. These studies focused on perceptional robot acceptance (13 studies), behavioral robot acceptance (6 studies), and perceptions of robots (6 studies).
Perception of Robots. Six studies investigated whether contextual factors influence perceptions of security robots. Two studies found that national cultural differences can be important [
48,
53]. National culture significantly influences people’s overall attitudes regarding security robots’ use of weapons [
48]. Specifically, Chinese individuals living in China displayed a more approving attitude toward weapons than Americans residing in China. Another study examined national culture by adopting a generative design methodology and conducting semi-structured interviews [
53]. This study found that participants from Korea and the United States had different perceptions of how security robots should be used. US participants expected security robots to be part of a house security system and were comfortable with allowing them to use a weapon. Korean participants expected robots to perform security tasks only for children rather than for the house. Also, Korean participants expected a security robot guarding children to have a friendlier appearance, while US participants expected security robots to be more threatening and machine-like.
The impact of agent type (robot vs. human) was examined in two studies and yielded mixed results [
20,
41]. One study found that a security task performed by a robot was perceived as significantly more intentional, less surprising, and more desirable than when performed by a human security officer [
20]. Conversely, another study found no differences between human security officers and security robots regarding intimidation, fairness, and friendliness [
41]. Another study [
93] examined the influence of robot tasks and found no significant difference in participants' attitudes toward robots performing security tasks versus guidance tasks, but participants attributed significantly more masculine traits to the security robot and more feminine traits to the guidance robot. Finally, one study [
108] examined the impact of interaction scenarios and found no significant differences in the perceived likability, intelligence, and safety of the robots between indoor and outdoor scenarios.
Key takeaways:
(1)
National culture significantly influences attitudes toward security robots, particularly regarding the use of weapons.
(2)
Mixed results were found concerning whether people’s perceptions of human security officers differ from those of security robots.
(3)
More masculine traits are attributed to security robots and more feminine traits to guidance robots.
(4)
There is little evidence that the context of interaction itself influences perceptions of security robots.
Robot Acceptance. Nineteen studies in thrust area 3 investigated the impact of contextual factors on people's acceptance of robots, with six measuring behavioral acceptance and 13 measuring perceptional acceptance.
1. Behavioral Acceptance. The impact of contextual factors on behavioral robot acceptance was investigated in six studies. Two of them explored the impact of national culture and found that it affects human engagement and compliance [
12,
54]. One study investigated the influence of national culture on participant engagement with robots across tasks ranging from low sociability (security guard) through middle sociability (tour guide, entertainment) to high sociability (teaching) [
54]. Interestingly, Chinese and Korean participants demonstrated greater engagement with security robots than German participants did, although all participants engaged more with teaching robots and less with the security robot. The authors attributed these potential national cultural disparities to the collectivist nature of Chinese and Korean cultures, which encourages participants to be more receptive to communication and suggestions from either type of robot, compared with Germany's more individualistic culture. Another study explored the influence of national culture on participants' compliance rates with peacekeeping security robots [
12]. Notably, Chinese participants residing in the United States exhibited the highest compliance with the robot, whereas Americans living in China displayed the lowest. The authors attributed this difference to China's stringent weapon regulations and to police shootings in the United States.
Three studies investigated the influence of agent type (human or robot) and robot task, finding that people engage more with human security officers than with security robots [
48,
59,
94]. In one study, participants were more inclined to notice and interact with human guards than security robots, with the latter often being ignored [
94]. Surprisingly, participants maintained a greater distance from human security guards than robots; however, this closer proximity to robots may stem from the need to engage with their screens or from curiosity toward the robots. This study also found that humans are more likely to notice and interact with robots in security mode than in guidance mode. Similarly, another study found that participant engagement was significantly higher when interacting with real human guards compared to security robots [
59]. These researchers also noted that participant engagement was significantly higher when security robots performed security protocol tasks (such as checking IDs) rather than greeting protocols. Another study explored the impact of robot tasks. This study found that participants’ compliance rates were significantly higher when robots requested them to relinquish a non-weapon item compared to weapons [
48]. Another study tested a counter-trivialization strategy, in which the robot's communication was designed to counter users' dissonance-reducing trivialization of its instructions [
81]. Unlike baseline robots, which repeated an admonishing sentence twice, robots using this approach delivered a specific counter-trivialization message. Study results showed that this strategy significantly increased people's compliance with the robot's instructions.
Key takeaways:
(1)
National culture significantly influences human engagement and compliance with security robots.
(2)
People are likelier to notice and interact with human security guards than robot security guards, and they maintain greater distances from human guards.
(3)
Engagement with security robots varies based on the task performed by the security robot.
(4)
Implementing a counter-trivialization strategy in a robot’s communication significantly increases human compliance.
2. Perceptional Acceptance. In thrust area 3, 13 studies investigated perceptional robot acceptance outcomes. National culture was the most-explored topic, with six studies, and the majority reported significant effects [
11,
16,
47,
58,
78,
110]. Three studies exploring the impact of national culture on trust found mixed results. One study tested the effect of participants' country of residence on trust in the robotic peacekeeper and their interpersonal trust [
58]. Interpersonal trust measures the trust between humans, while trust in the robotic peacekeeper is measured along three dimensions—purpose, process, and performance—and is focused on the trust between humans and robots. The results indicated that Americans residing in Japan exhibit significantly higher interpersonal trust than Americans living in the United States and China, possibly due to the high value placed on in-group trust in Japan [
58]. In contrast, regarding trust in robotic peacekeepers, Americans living in the United States displayed significantly higher trust than those residing in China, particularly in the performance and process dimensions. Furthermore, Americans in the United States showed significantly more trust in robotic peacekeepers than Americans living in Japan.
Another study examined peacekeeping robots and found a significant effect of culture [
11]. Americans living in all countries tended to trust security robots more than other cultural groups did. According to the paper, this may be attributed to the widespread adoption of robotic technologies in American culture and industry and the individualistic tendencies in the United States, which might lead US participants to be more accepting of robotic technologies. In contrast, Japanese participants living in the United States had the lowest trust in security robots. The author offered a conjectural interpretation suggesting that Japanese participants view robots as more valuable as social partners than as actors in law enforcement [
11]. However, another study looked at the influence of race and did not find significant differences in perceived occupational competency or trust [
78].
Three other studies explored the impact of culture on robot acceptance and preference, yielding mixed results [
16,
47,
110]. One study investigated the impact of culture and found differences in the preferred identity of the robot defender [
16]. Japanese participants favored human defenders over robot defenders, whereas US participants showed no significant preference for human over robot defenders. The study also revealed that US participants were more accepting of the idea of robots using lethal force compared to their Japanese counterparts, who preferred non-lethal and blocking defense strategies over other types of self-defense. However, the study did not find a difference in acceptance of robot defenders between United States and Japanese participants. Another study gathered people’s attitudes toward AI robots using a survey [
47] and found that Japanese university students were significantly more concerned about security robots than their Taiwanese counterparts and agreed that robots should have security functions. Finally, one study surveyed people's needs for home service robots [
110]. The results indicated that Taiwanese and Japanese participants believe home service robots should also function as security robots.
Five studies in thrust area 3 examined the influence of agent type by comparing security robots to human security officers, with three indicating greater acceptance for humans over robots [
24,
78,
86] and two indicating preference for robots over humans [
3,
25]. One study found that participants preferred hiring human guards over security robots [
78]. Another study reported that participants found it more acceptable for human security officers to use non-lethal force for human protection compared to humanoid robots [
24]. Additionally, the same study revealed that participants found it more acceptable for a humanoid robot to use force than an autonomous vehicle. Likewise, another study examined law enforcement officers’ trust in drones for surveillance operations and revealed a preference for human oversight rather than autonomous drone operation [
86].
The other two studies found a preference for robots over humans [
3,
25]. A study was conducted to understand the acceptance of a retail service robot by customers and retail service workers [
25]. The retail service robot was capable of providing friendly guidance and admonishing inappropriate customer behaviors. Most customers preferred the robot over the human when it came to being admonished. Likewise, retail service workers preferred using the robot for admonishing customers because it was less offensive and easier for a robot to perform. Another study investigated students' attitudes toward library robots tasked with providing directions, ensuring security, answering users' questions, and monitoring activities [
3]. The results indicated that students viewed robots as more suitable for security and monitoring than librarians.
Three studies examined the impact of context on the acceptance of security robots and found mixed results [
57,
63,
108]. One study explored contextual factors and discovered a strong interaction between danger cues and robot decisions [
57]. In the study, scenes were categorized into low, medium, and high danger levels, with the robot tasked to assess the danger of each scene. Participants exhibited greater trust in robots when their decisions aligned with the perceived danger level of the scene, such as labeling low-danger scenes as safe and high-danger scenes as threatening. Similarly, another study identified barriers to security robot acceptance, which were highly context-dependent [
63]. Participants saw clear benefits in deploying security robots in high-security areas but were uneasy about their use in low-risk locations. Additionally, acceptance varied between day and night; participants believed robots could enhance safety for children during the day but were concerned about potential negative impacts on the unhoused population at night. Acceptance was also found to be “use-dependent” in that participants were amenable to the technology used for deterring bad behavior but opposed the collection of identifiable data that could lead to personal tracking and monitoring. Some participants also expressed fears about biased decision-making by these robots, such as the potential for unjust arrests of people of color due to surveillance [
63]. In contrast to these findings, another study examined the influence of interaction scenarios by comparing an indoor hallway scenario with an outdoor parking lot, revealing no significant differences in people’s trust, trustworthiness, and desire to use security robots [
108].
Key takeaways:
(1)
National culture significantly impacts the trust and acceptance of security robots.
(2)
Preferences between human security personnel and security robots vary depending on the context and role.
(3)
Contextual factors associated with different risk levels significantly influence the acceptance of security robots.