
A Human–Security Robot Interaction Literature Review

Published: 24 December 2024

Abstract

As advances in robotics continue, security robots are increasingly integrated into public and private security, enhancing protection in locations such as streets, parks, and shopping malls. To be effective, security robots must interact with civilians and security personnel, underscoring the need to enhance our knowledge of their interactions with humans. To investigate this issue, the authors systematically reviewed 47 studies on human interaction with security robots, covering 2003 to 2023. The number of papers in this domain has increased significantly over the last 7 years. The article provides three contributions. First, it comprehensively summarizes existing literature on human interaction with security robots. Second, it employs the Human–Robot Integrative Framework (HRIF) to categorize this literature into three main thrusts: human, robot, and context. The framework is leveraged to derive insights into the methodologies, tasks, predictors, and outcomes studied. Last, the article synthesizes and discusses the findings from the reviewed literature, identifying avenues for future research in this domain.

1 Introduction

As advances in robotics continue, security robots are increasingly integrated into public and private security in the real world, enhancing protection in locations such as warehouses, streets, parks, shopping malls, and airports [22, 51, 59, 63]. In this study, “security robots” refers to robots deployed to prevent unwanted activities through their presence, surveillance, and ability to notify authorities of unauthorized personnel or actions. Security robots monitor and protect property and people within civilian environments (i.e., non-combat zones), including commercial, public, and residential areas. This broad definition acknowledges that any robot can be a security robot based on its actual use regardless of its intended purpose. Accordingly, the scope of security robots encompasses robots initially designed for non-security-related tasks but integrated into wider security operations for their capacity to surveil their environment.
Security robots can offer a cost-effective strategy for policing dangerous situations, thereby mitigating risks to law enforcement personnel [8, 57, 91, 108]. These dangerous situations can include protecting people and property against violence, vandalism, and other illegal activities [18, 23, 41, 62, 93, 104, 106]. For example, Knightscope, a popular artificial-intelligence-powered security robot, is being deployed to patrol busy parking structures, shopping malls, hospitals, and campuses to scan and detect potential threats in the surrounding area [31, 52]. According to a recent report, the global market for security robots is estimated at $15.7 billion in 2024 and is expected to increase to $29.7 billion by 2029 [68].
For security robots to be effective, they must interact with civilians and security personnel. However, their interactions in places like New York City have been fraught with problems and controversies [63]. These problems and controversies underscore the need to enhance our knowledge of their interactions with humans. To accomplish this, it is crucial to identify what we currently know about what facilitates or hinders human interactions with security robots [29, 76, 109]. This assertion is supported by a burgeoning literature on human interactions with security robots spanning fields like computer science, information science, psychology, and engineering. However, this literature remains fragmented and incoherent, limiting the potential for making scholarly progress and finding practical solutions.
This article addresses the pressing need to assess the existing literature, offer a unified perspective of the research domain, and identify areas for further exploration that would enhance our understanding. To accomplish this, our review offers three contributions. First, it presents the results of a systematic literature review on human interaction with security robots. Second, it adapts the human–robot integrative framework (HRIF) [75] to categorize the literature into three main thrusts. We leveraged the HRIF to derive insights into the methodologies, tasks, predictors, and outcomes utilized. Third, our review integrates and discusses the findings across the literature while highlighting important directions for future research.

2 Literature Review

To conduct our literature review, we followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [72] to identify all related work on human–security robot interaction. Our process is described in the following subsections.

2.1 Definition of Security Robots

Security robots can be categorized into design-specific security robots (DSSRs) and non-design-specific security robots (non-DSSRs). DSSRs are robots explicitly designed for and used for security purposes. For example, Knightscope K5 and RAMSEE are robots designed to fulfill security activities such as patrolling and monitoring areas. Non-DSSRs are robots that, although not specifically designed for security purposes, are used for security activities. For instance, a customer service robot may monitor and relay security-related data to a control room. Similarly, a home service robot can perform cleaning and gardening duties while surveilling the premises for potential risks.
Military robots are another category of robots that are not considered security robots in this article. To differentiate between security and military robots, this article focuses on four key aspects:
Personnel: Operator of the robots (domestic private security or public law enforcement vs. military personnel);
Purpose: Intended function (providing private security, enforcing domestic laws vs. achieving military objectives);
Place: Operational environment (non-combat zones vs. combat zones); and
Protagonists: Target entities (domestic individuals, criminals vs. foreign adversaries).
Military robots are specifically designed and used for warfare or activities involved in war or regional conflict. Whereas security robots are deployed in non-combat zones by domestic security personnel to protect local populations and enforce domestic laws, military robots are utilized by military forces for warfare in active combat zones, often engaging foreign adversaries. However, if robots were used by military personnel acting as domestic law enforcement to enforce local laws, they were labeled as security robots. For example, four studies focused on peacekeeping troops acting as domestic police using security robots [11, 12, 48, 58]. We labeled these as security robots because their purpose was not to engage in warfare but to act as local police officers in non-combat zones to enforce domestic laws. That being said, these were the only studies that seemed to fall within that gray area.
Likewise, the following robots do not belong to the category of security robots: (1) Rescue robots are responsible for urban search-and-rescue in disasters. Their primary purpose is saving lives in disaster situations. They focus on finding and extracting victims, not deterring threats or enforcing security measures. (2) Industrial robots perform dangerous tasks involving chemical, biological, radiological, or nuclear substances or dismantling explosive materials. Their focus is on operational safety and worker protection in hazardous environments. They are designed to handle dangerous materials and tasks, not to monitor for security threats. (3) Medical assistant robots remind patients to take their medication on time. Their role is patient care and support. They are not designed to monitor or control access to secure areas or detect intruders. Finally, the scope of security robots applies regardless of the robot’s form, including drones.

2.2 Search Process

A systematic search was conducted using four search engines: Google Scholar, ACM Digital Library, IEEE Xplore, and Scopus. To establish our search terms, we initially conducted multiple searches and, through iterative refinement, decided on a specific set of search terms: (Security OR Peacekeeping OR Guard OR Police OR Military OR Safety OR Patrol OR Protection) AND (Robot OR Robots). During the procedure, we found that there is still a lack of common agreement regarding the academic terminology for security robots. Researchers sometimes use terms such as “guard robot” and “police robot,” among others. Therefore, we chose to expand beyond the single term “security robot” and include keywords associated with the functions and applications of security robots to prevent potential omissions. Data were gathered on 23 August 2023. In the search process, we manually reviewed the results per search engine results page (SERP). We paged through the SERPs until we reached a page where no paper met our search criteria. All search results before that page were extracted to be reviewed. Each SERP typically displayed 10–25 results, varying by database. This process returned 4,449 results, with one additional paper found through cross-referencing. After removing duplicates, we identified 4,116 results.
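To make the two mechanical steps of this process concrete, the minimal Python sketch below shows how the Boolean query above can be assembled and how duplicate records can be removed by normalized title. The record format, helper names, and sample data are illustrative assumptions, not the tooling actually used in this review.

# Illustrative sketch (not the authors' actual tooling): assemble the Boolean
# query described above and remove duplicate records by normalized title.
SECURITY_TERMS = ["Security", "Peacekeeping", "Guard", "Police",
                  "Military", "Safety", "Patrol", "Protection"]
ROBOT_TERMS = ["Robot", "Robots"]

def build_query(left_terms, right_terms):
    """Combine the two term groups into a '(A OR B ...) AND (C OR D)' string."""
    left = " OR ".join(left_terms)
    right = " OR ".join(right_terms)
    return f"({left}) AND ({right})"

def deduplicate(records):
    """Keep one record per title, ignoring case and extra whitespace."""
    seen, unique = set(), []
    for record in records:
        key = " ".join(record["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

if __name__ == "__main__":
    print(build_query(SECURITY_TERMS, ROBOT_TERMS))
    # Hypothetical records exported from two databases with the same paper.
    sample = [
        {"title": "A Human-Security Robot Interaction Literature Review"},
        {"title": "a human-security  robot interaction literature review"},
    ]
    print(len(deduplicate(sample)))  # -> 1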

2.3 Screening Procedure

After the initial search, we conducted a two-stage screening process and identified 47 results. Studies were first screened based on their title and abstract, then on their full-text content. The following inclusion criteria were used throughout all screening procedures. First, studies had to be user studies involving human participants. Second, studies had to use or measure security robots. Third, studies had to be written in English and published as articles or academic works.
As shown in Figure 1, the screening was conducted manually in the Rayyan application [71]. The first stage of screening was performed based on each paper’s title and abstract. Based on our eligibility criteria, we identified 544 eligible studies. Next, we conducted the second full-text screening stage and identified 47 papers for inclusion in the final review. Most excluded papers were technical papers focusing on software or hardware development without involving humans. One paper [82] was excluded because there was no access to the full text.
Fig. 1. PRISMA flow diagram of the literature review process.

3 Review Results

3.1 Publication Outlets

The most popular publication outlet was conferences, with 24 articles representing 51.06% of studies; this was followed by journals, with 19 articles representing 40.43%, and theses and dissertations, with three articles, or 6.38%. One article was a book chapter, representing 2.13%. A breakdown of publications by type is shown in Figure 2. The dominant publication venue was the IEEE International Symposium/Conference/Workshop on Robot and Human Interactive Communication (RO-MAN), with eight papers, followed by the Human Factors and Ergonomics Society Annual Meeting with five, and the ACM/IEEE International Conference on Human–Robot Interaction with four. Frontiers in Psychology, IEEE Transactions on Human–Machine Systems, International Journal of Social Robotics, International Conference on Cross-Cultural Design, and ACM Transactions on Human–Robot Interaction each published two studies. The other 20 studies were all published in unique venues. Most publications were in outlets related to human–computer interaction (21), human factors (7), robotics (5), psychology (3), design (2), information (2), marketing (1), and cognitive science (1). One study was published in an outlet focused on reference services, one in an outlet for multimodal technologies, and three studies were theses or dissertations. Publication years ranged from 2003 to 2023, as shown in Figure 3.
Fig. 2. Publications by type.
Fig. 3. Publications by year.

3.2 Sample Data

3.2.1 Sample Size.

The sample sizes of 47 studies are shown in Figure 4. The sum of all sample sizes across studies was 8,700 participants, with a mean value of 185 and a standard deviation of 188. The largest sample size was 1,009 participants [3], while the smallest sample size was 14 participants [86]. If we excluded these two outliers, the mean sample size would be 170 and the standard deviation 144, indicating that most papers had a large sample size.
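As a worked illustration of these descriptive statistics, the sketch below recomputes the mean and standard deviation with and without the largest and smallest studies. The per-study sample sizes appear only in Figure 4, so the list here is a hypothetical placeholder rather than the actual data.

import statistics

# Hypothetical per-study sample sizes; the real values are plotted in Figure 4.
sample_sizes = [14, 120, 185, 240, 1009]

def describe(sizes):
    """Return the mean and sample standard deviation of a list of sizes."""
    return statistics.mean(sizes), statistics.stdev(sizes)

mean_all, sd_all = describe(sample_sizes)

# Drop the single smallest and largest studies, mirroring the 170/144 figures above.
trimmed = sorted(sample_sizes)[1:-1]
mean_trimmed, sd_trimmed = describe(trimmed)

print(f"All studies: mean={mean_all:.0f}, sd={sd_all:.0f}")
print(f"Excluding min/max: mean={mean_trimmed:.0f}, sd={sd_trimmed:.0f}")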
Fig. 4. Participants by study.

3.2.2 Participants’ Age.

Twenty-five of 47 studies reported participants’ average age. The mean age across all studies was 32, with a standard deviation of 7 years. As shown in Figure 5, most studies had an average age between 20 and 40.
Fig. 5. Average participant age by study.

3.2.3 Gender.

The overall gender distribution across all studies was well balanced: female participants represented 48.49%, while male participants represented 51.51%. However, as Figure 6 illustrates, the gender distribution varied significantly across individual studies, and many distributions were uneven. Three studies had a disproportionately large share of male participants, exceeding 70% [54, 86, 110]. Fourteen studies did not specify their gender distribution.
Fig. 6. Gender balance by study.

3.3 Interaction Methods, Application Domains, and Tasks

3.3.1 Interaction Methods.

Studies in the review used five approaches to facilitate human–security robot interaction. Among them, 34.0% of studies used real physical robots. Image/video materials were utilized in 25.5% of studies, wherein participants viewed images/videos depicting the security robot and answered survey questions about their actual or anticipated interaction experience. Virtual environments were used to build interaction scenarios for 12.8% of studies. Eleven other papers, or 23.4%, used questionnaires to measure attitudes without providing physical security robots, videos of security robots, or images. These studies investigated individuals’ perceptions of various types of robots, including security robots. Finally, two papers (4.3%) utilized a participant-centered design approach to measure people’s perceptions of future robots [53, 87]. These studies prompted participants to envision scenarios, possible tasks, and applications of security robots.

3.3.2 Application Domains.

Among the 47 papers, 43 discussed the application domains of security robots, while four merely mentioned “security robots” without providing further details in their questionnaires. Based on these 43 papers, we categorized the application domain of security robots into five main categories, as shown in Table 1: guard robot, police robot, military/peacekeeping robot, service robot, and general security robot. The guard robot, accounting for 32.6%, was the most popular category. Private agencies deploy these robots in diverse settings like campuses, airports, and markets to monitor, patrol, control access, and detect threats.
Table 1. Application Domains
Application domain | Description
Guard robot | Deployed by private agencies in settings such as campuses, airports, and markets to provide guard and protection.
Police robot | Deployed by official police departments to fight crime and ensure safety in public areas.
Military/peace-keeping robot | Deployed in the military and army to protect the safety of civilians or soldiers.
Service robot | (1) Public service robot: deployed by agencies in hotels or markets to provide comprehensive services, including protecting the safety of people by reminding them to wear masks and patrolling to monitor surroundings. (2) Private/home service robot: deployed by individuals or families for house security tasks to protect the safety of family members or individuals.
General security robot | Deployed to perform various security tasks without specific application scenarios or potential employers.
Service robots used for security accounted for 25.6% of the studies. These papers can be divided into public and private/home service robots. Public service robots, deployed by agencies, interact with the general public in spaces typically accessible to all, such as hotels and shopping malls, offering comprehensive services. These services include ensuring that individuals comply with rules (for example, by reminding them to wear masks), patrolling to monitor surroundings, and admonishing inappropriate behavior. Private/home service robots, utilized by individuals or households, engage with private individuals within residential settings, primarily addressing household security tasks by monitoring private buildings that are typically not open to the public. Military/peacekeeping robots were studied in 16.3% of papers, and these examined robots deployed in military settings used to protect civilians and military personnel. Police robots accounted for 14.0% of studies. These studies focused on robots deployed by official police departments to fight crime and ensure safety in public areas. These robots perform tasks such as monitoring, surveillance, issuing admonishments if necessary, and assisting police officers in performing security tasks. Finally, general security robots, in 14.0% of studies, perform various security tasks, but these studies did not specify particular application scenarios or potential employers.

3.3.3 Security Tasks.

Here, we summarize the security tasks that were used across all papers. Thirty-two studies used tasks that we categorized into six groups to create interaction between participants and security robots, as shown in Table 2. The other 15 papers only mentioned “security robots” and “security tasks” through questionnaire items or interview questions without creating real interactions. The most popular task type was labeled as access control tasks, representing 40.6% of these studies. Access control tasks require the robot to check a participant’s ID for gaining access to a particular area. This often includes having the robots introduce themselves and check participants’ identification before giving them access to a room or building. Two studies used a robot to guard the exit and instructed participants not to use a particular exit but to use an alternative route [4, 5].
Table 2. Security Tasks
Task | Definition | Number | Papers
Access control task | Monitor and control access to a specific area. | 13 (41%) | [4], [5], [34], [40], [41], [50], [54], [59–62], [93], [94]
Integrated security tasks | Simultaneous execution of multiple security functions. | 7 (22%) | [3], [25], [56], [70], [88], [89], [108]
Military security task | Assist military units and provide security. | 5 (16%) | [11], [12], [48], [57], [58]
Admonish task | Admonish inappropriate behaviors. | 2 (6%) | [67], [81]
Protect individual task | Protect an individual human. | 2 (6%) | [16], [24]
Others | Other security tasks. | 3 (9%) | [13], [46], [86]
Total | | 32 (100%) |
The second most popular task type was integrated security tasks, representing 21.9%. These papers used multiple security tasks at the same time. For example, these security robots might introduce themselves, guard a park entrance/exit, investigate a disturbance, patrol a road, escort an individual, detect intrusion, help users lock doors, and remind users of potential emergencies.
The third most popular task type was military security tasks, representing 15.6% of studies. Military security tasks involve the robot providing security for a military unit. For example, participants act as military security personnel traveling with a convoy along a route with a robot partner attempting to identify possible terrorist threats. This also includes modified errand tasks designed to tax participants cognitively.
The fourth task type, admonish tasks, was found in two studies (6.2%). In these studies, robots admonished participants when they performed inappropriate behaviors, such as using smartphones while crossing the road. The fifth task type was the “protect individual” task, used by two studies (6.2%), in which robots used defensive force to protect a human victim from violence.
Finally, three studies each utilized different security interaction tasks. One study [46] required participants to act as potential suspects and be interrogated by the security robot. Another study [13] required participants to operate the robot to retrieve objects, use the pan-tilt camera mechanism to look around, toggle two-way audio/video streams for communication, and drive and steer the robot base. The third study required participants to cooperate with the robot to perform law enforcement clearing operations [86].

3.4 Outcomes

In this review, outcomes are dependent measures utilized in the studies to assess human–security robot interactions. This review categorizes these outcomes into three broad groups: robot acceptance, perception of robots, and human user performance. This categorization facilitates the organization and comparison of results across studies. Each outcome category is discussed in the following sections.
Robot acceptance studies, which accounted for 87.2% of the studies, focus on what encourages humans to interact with the security robot. This category covers outcomes related to both behavioral and perceptual measures of robot acceptance. Behavioral acceptance, accounting for 21.3% of studies, encompasses actual engagement outcomes such as interactions with robots, active responses, distance from robots, and denial behavior.
Perceptual acceptance, accounting for 66.0% of studies, includes factors such as acceptance, trust/reliance, perceived trustworthiness, intention to use, perceived behavioral control, satisfaction, preference, and expectations regarding robots [27]. The trustworthiness category also includes sub-items such as competency, while the intention to use category includes buying intentions, perceived usefulness, and perceived ease of use. Perceived behavioral control represents the perception of ease or difficulty in performing the behavior of interest [9, 88]. These perceptual outcomes have all been associated with perceptual measures of robot acceptance [27].
Perception of robots is the second broad category and accounted for 38.3% of the studies. Perception of robots studies focused on measuring or changing human perceptions of the security robot. This category encompasses outcomes such as perceived robot image, attitude toward robots, and interpretation of the robot’s intent. This category also includes the Godspeed questionnaire, a popular and widely used measure that includes perceived intelligence, perceived safety, animacy, anthropomorphism, and likeability of robots [10]. Robot image includes variables such as perceived masculinity and femininity, and positive (e.g., friendly) and negative (e.g., intimidating) images.
Human performance constituted 2.1% of the studies. It refers to measures of participants’ cognitive and behavioral performance collected throughout the study. It includes outcomes such as situational awareness, workload, and cognition. Table 3 details the studies and the specific outcomes they measured.
Table 3. Outcomes
Human Performance
  Situational awareness: [86]
  Workload: [86]
Perception of Robots
  Robot image and evaluations: [4], [5], [20], [26], [40], [41], [46], [55], [56], [88], [93], [103]
  Attitudes: [5], [47], [48], [53], [64], [88], [93]
  Godspeed: [4], [5], [20], [54], [55], [108]
  Interpretation of intent: [70]
Robot Acceptance
  1. Behavioral Acceptance
    Engagement in interaction and response: [12], [54], [56], [59], [67], [81], [93], [94]
    Distance from robots: [48], [93]
    Denial behavior: [46]
  2. Perceptual Acceptance
    Trust/reliance intentions: [11], [13], [14], [34], [50], [54], [57], [58], [60–62], [64], [70], [78], [86], [88], [108]
    Trustworthiness: [14], [34], [50], [60], [62], [63], [70], [108]
    Intention to use: [34], [60–62], [78], [108], [111]
    Satisfaction, preference, and expectations: [3], [16], [25], [26], [35], [47], [54], [63], [78], [83], [87–89], [103], [110]
    Perceived behavioral control: [88]
    Acceptance: [16], [24], [63]

3.5 Research Thrust Areas

Literature on human–security robot interaction has explored various topics. Following the human–robot integrative framework (HRIF) proposed by Robert [75], we summarized research topics into three main thrust areas: the human in human–security robot interaction, the robot in human–security robot interaction, and the contextual factors in human–security robot interaction. As shown in Tables 4, 5, and 6, many studies examined multiple factors in different areas at the same time, which suggests potential interactions among human factors, robot factors, and contextual factors.
Table 4. Topics in Thrust Area 1
Thrust Area 1: The Human in Human–Security Robot Interaction
1-1 Gender: [14], [26], [34], [40], [41], [63], [103], [111]
1-2 Demographics (age, education, technology experience): [26], [40], [78], [83]
1-3 Attitude: [50], [64]
1-4 Mental model: [70]
1-5 Personality: [61]
1-6 Obedience behavior: [4], [5], [81]
1-7 Guilt status: [46]
Guiding Question: How do human attributes impact human–security robot interaction?
Table 5. Topics in Thrust Area 2
Thrust Area 2: The Robot in Human–Security Robot Interaction
2-1 Robot gender: [14], [78], [88], [89]
2-2 Robot’s non-physical design (stated social intent, autonomy, personality, threat analysis type, interrogation approach): [46], [57], [60], [62–64], [88]
2-3 Robot’s physical design (anthropomorphism, outlook, size, voice/volume, battery life, camera and view): [13], [35], [54–56], [108]
2-4 Robot’s non-physical behavior (politeness, communication style, cognitive dissonance intervention strategy): [40], [41], [58], [81]
2-5 Robot’s physical behavior (weapon, stand/approach/attack/defense behavior): [5], [12], [24], [67], [87]
2-6 Reliability: [62]
2-7 Robot presence: [56], [63], [86]
Guiding Question: How do robot attributes impact human–security robot interaction?
Table 6. Topics in Thrust Area 3
Thrust Area 3: The Contextual Factors in Human–Security Robot Interaction
3-1 Robot task: [48], [58], [93], [94]
3-2 Agent type: [3], [20], [24], [25], [41], [46], [59], [78], [93]
3-3 Context and usage: [57], [63], [108]
3-4 Culture: [11], [16], [47], [48], [53], [54], [58], [78], [110]
Guiding Question: How do contextual factors impact human–security robot interaction?

3.5.1 Thrust Area 1: The Human in Human–Security Robot Interaction.

Thrust area 1 studies, accounting for 38.3%, investigated the effects of human attributes on human–security robot interaction. Researchers took human characteristics as independent variables and examined their impact on one or more of the three categories of outcomes. Table 4 shows the detailed themes in thrust area 1. Demographic factors were the most popular topic examined, with gender being one of the most investigated demographic factors, representing 17.0%. Studies focusing on other demographic factors such as age and education accounted for 8.5%. Three studies explored the impact of human obedience behavior, and two examined the influence of participants’ attitudes. Additionally, one study examined mental models, another personality, and another human guilt status.

3.5.2 Thrust Area 2: The Robot in Human–Security Robot Interaction.

Thrust area 2 consisted of 55.3% of papers in this review. These studies investigated how robot attributes impact the interaction between humans and security robots. Table 5 presents the detailed themes in this area. Multiple robot-related factors have been explored in this thrust area. Robot non-physical and physical design are the two most popular topics in this area, each one accounting for 31.8% of the research. Robot’s non-physical design category includes factors such as threat analysis type, autonomy, personality, stated social intent, and interrogation approach. Threat analysis type refers to the robot’s assessment of whether a threat exists based on physical or psychological factors. Physical-based evaluation involves the analysis of heat patterns in buildings or weapon-shaped X-ray images, for example. In contrast, psychological-based evaluation involves the analysis of human intentions based on facial expressions, eye movements, pulse rate, etc. The stated social intent of robots means that the robot would inform people whether its benevolence is directed toward the visitor, the building occupants, itself (self-protective), or toward the visitor with self-sacrifice (maximize protection to visitors even if the robot can be destroyed). The interrogation approach refers to when the robot interrogates suspects, questioning them with either an innocent-presumptive or guilt-presumptive stance. An innocent-presumptive stance assumes the suspect did not commit the crime, while a guilt-presumptive one assumes the suspect is guilty. The robot’s physical design encompasses factors such as anthropomorphism, size, voice or volume, battery life, and cameras.
The third most popular topic in this thrust area is robots’ physical behavior, explored in five studies. These studies covered topics such as standing, approaching, attacking, defensive behavior, and the use of weapons. Four studies also examined the robot’s non-physical behavior, such as its politeness during communication. This category also includes a study that examined the use of a cognitive dissonance intervention strategy by a robot [82]. The robot utilizing this strategy would utter sentences encouraging participants to reduce cognitive dissonance and not ignore the robot. Finally, the status of robot presence and the reliability of robots were also explored.

3.5.3 Thrust Area 3: Contextual Factors in Human–Security Robot Interaction.

Thrust area 3 consisted of 44.7% of papers. These studies examined how contextual factors impact human–security robot interaction. Agent type was one of the most popular themes in this area, representing 19% of the studies in this review. These studies compared the preference between human security agents and robot security agents for a particular security task. Culture was equally popular, also accounting for 19% of the studies in this review. Studies in this category discussed the influence of different national cultures and resident political environments. Robot task was the third most popular topic in this area, investigated in 8.5% of the studies. Three studies within this domain specifically explored how task type can change the utilization of security robots. Table 6 provides an overview of the studies and the themes they investigated.

3.6 Findings

Overall, the literature reviewed can be organized collectively into human, robot, and contextual factors focused on examining three outcome categories, as illustrated in Figure 7. In the literature, the predominant focus lies on robot factors (55.3%), with fewer studies examining contextual factors (44.7%), and even fewer, human factors (38.3%). Robot acceptance is the most popular outcome (87.2%), followed by perceptions of robots (38.3%). User performance is the least used outcome (2.1%). Next, we present the findings, which are discussed and organized by the three main thrust areas: the human, the robot, and the contextual factors. Key takeaway findings are summarized at the end of each section.
Fig. 7. Human–security robot integrative research model.

3.6.1 Thrust Area 1: The Human in Human–Security Robot Interaction.

In this review, 38.3% of studies examined human factors, which include demographics such as gender and age, attitude, mental model, personality, human obedience behavior, and guilt status. These studies focused on perceptions of robots (8 studies), perceptual robot acceptance (11 studies), and behavioral robot acceptance (2 studies).
Perception of Robots. Eight studies investigated the influence of human factors on perceptions of robots. Four examined the influence of gender on attitudes and images of security robots, finding mixed results [26, 40, 41, 103]. Three studies examined participants’ gender and did not find a significant effect of human gender on perceived threat [40, 41], fairness [40, 41], functionality [40, 41], correctness [40, 41], or affective evaluations [26] of security robots. However, one study found that women perceived the adoption of domestic robots, which perform household chores and personal security tasks, as a much riskier proposition than men did [103].
Two studies examined age, both finding non-significant results [26, 40]. One study examined the effect of age and did not find any significant effect on the security robots’ image, including perceived threat, fairness, functionality, or correctness [40]. Another study compared people’s positive evaluations of security robots between age groups younger than 25 and those older than 25 and again did not find significant differences [26].
Two studies investigated participants’ perceptions of security robots and their compliance with the robot [4, 5]. In one study, participants who complied with the robot’s directives perceived it as safer and somewhat less assertive when compared to non-compliant participants [4]. Additionally, the same study found that obedient individuals attributed higher anthropomorphic qualities to the security robot and felt it was marginally, though not significantly, less aggressive. A similar study examined these relationships again and found that individuals obedient to the security robot rated it as less aggressive and safer, with higher perceived intelligence, higher anthropomorphism, higher sense of responsibility, and more comfort compared to those who disobeyed the robot’s instructions [5]. Although all these trends were non-significant, they aligned with previous findings.
Only one study investigated the influence of individuals’ technical background on their perception of security robots [26]. This research uncovered that individuals with technical expertise hold more favorable opinions of security robots than those without.
Mental models were only examined in one study [70]. Participants with the security guard task-role mental model had a better interpretation of robot security guard behaviors’ intent than participants with no such prescribed mental model. Also, participants with the security mental model perceived non-security behavior less accurately, which indicates the importance of matching mental models. The correct mental model was also positively associated with participants’ perceived robot intelligence and safety.
The impact of the guilt status of human suspects on people’s perceptions of robots was also examined [46]. Study participants were separated into two cohorts: one comprising individuals undergoing robot-led interrogations and the other of observers witnessing these interactions. Participants in both roles evaluated the robot’s characteristics. Subjects designated as “suspects” were further categorized as “guilty” or “innocent” and then interacted with the robotic interrogators. The results indicated that suspects’ perceptions of the robot’s exertion, pressure, anxiety induction, and friendliness were the same, regardless of their assigned guilt status. However, observers felt that robots appeared to apply more effort and pressure during interrogations when suspects were presumed guilty rather than innocent.
Key takeaways:
(1)
Although gender does not significantly affect overall perceptions of security robots, women view the adoption of domestic robots as riskier than men do, indicating complex gender attitudes toward different robot types.
(2)
Age does not seem to explain perceptions of security robots, indicating that other factors may be more influential in shaping attitudes toward these technologies.
(3)
Compliant individuals viewed security robots as safer and less aggressive and associated them with higher perceived intelligence and comfort, though these findings were not statistically significant.
(4)
Users with a pre-existing security guard mental model interpret robot behaviors better and perceive them as being more intelligent and safer.
Robot Acceptance. Thirteen studies examined the impact of human-related factors on robot acceptance, with two measuring behavioral robot acceptance and 11 measuring perceptual robot acceptance.
1. Behavioral Acceptance. Two studies in thrust area 1 examined behavioral robot acceptance. Human obedience behavior toward security robots was examined through interviews aimed at understanding causes of non-compliance [81]. These researchers found that cognitive dissonance reduction explained why many individuals ignored the warnings issued by the robots. Many participants justified their non-compliance by downplaying the importance of the robot’s instructions, considering them trivial. Others adjusted their reasoning, either enhancing the significance of their actions or attributing their non-compliance to the fact that few people were around. The influence of a suspect’s presumed guilt status on denial behavior was also examined [46]. The study found that the guilt status of the suspects did not significantly affect their self-evaluation of denial behavior, including anxiety, defensiveness, friendliness, or forcefulness. Additionally, observers’ judgments of guilt were unaffected by the suspects’ actual guilt or innocence. Further, there was no notable difference in observers’ assessments of suspects’ denial behaviors in terms of defensiveness or anxiety, although observers noted that suspects tended to deny more vehemently when they were innocent rather than guilty.
Key takeaways:
(1)
Cognitive dissonance explains non-compliance with security robots’ instructions because users often trivialize warnings or justify ignoring the robots’ directives.
(2)
The perceived guilt status of suspects does not reliably predict human responses or interactions with security robots.
2. Perceptual Acceptance. Eleven studies explored the impact of human factors on perceptual acceptance. Six studies examined the effect of gender on acceptance, with three finding evidence that women prefer and trust security robots more than men [14, 26, 34, 63, 78, 111]. One study found that men and women differ significantly in their reliance intentions and trustworthiness toward security robots [34]. Women were found to have higher reliance intentions on robots and higher perceived trustworthiness than men. Additionally, women were more likely to use security robots in hospitals and on college campuses than men. However, no differences were found between men’s and women’s intentions to use security robots in other public or in military settings. Another study investigated the home service robot and found that women participants emphasized security and safety functions more than men did [111]. A separate study utilized interviews to explore public opinions regarding security robots [63]. This study found that women are more inclined to accept these robots, potentially because they feel safer with them than with men. Additionally, women perceived security robots as posing a lower risk of violence than human security personnel.
However, three studies found no difference between male and female participants regarding their acceptance of security robots. One study did not find a significant result on either trust or perceived competency [14]. Similarly, another study found that gender did not significantly impact perceived occupational competency of or trust in security robots [78]. Another study investigated the impact of gender on people’s expectations regarding the time for security robots and related scenarios to become a reality in the future [26]; no significant differences were observed in the expectations of male and female participants regarding this time frame.
The influence of human age on acceptance was investigated in two studies, yielding conflicting results [26, 83]. One study revealed that confidence in security robot capabilities varies across human age groups [83]. Specifically, awareness of technology-enabled home safety control is generally higher among adults, while it tends to be lower in younger and older demographics. Notably, a quarter of elderly participants perceived home safety control as an unattainable task for robots. Another study examined the impact of age on anticipation for the future deployment timeline of security robots between individuals younger and older than 25 and found nonsignificant results [26].
One study examined the impact of individuals’ technical backgrounds on their expectations and time frame for implementing security robots in the real world [26]. This research revealed that, compared to non-technical individuals, those with a technical background are more optimistic, believing that the deployment of security robots will occur in a shorter time frame.
Two studies explored the relationship between attitude toward robots and trust [50, 64]. One study found that attitudes were significantly associated with trust [64]. These findings align with another study, which demonstrated a notable positive correlation between attitude and trustworthiness [50]. Also, trustworthiness (ability, benevolence) was found to have a significant effect on trust, indicating that trustworthiness would affect the trust model [50]. Apart from that, this study examined the relationships between human likeness and trustworthiness and between positive affect and trustworthiness, finding no significant results for either.
Only one study examined the influence of personality and perfect automation schema (PAS) and found they are significant predictors of trust and use intentions [61]. Personality was assessed along five dimensions: agreeableness, conscientiousness, extraversion, neuroticism, and intellect/imagination. On the other hand, PAS captures individuals’ attitudes and expectations toward advanced technology, encompassing two dimensions: high expectations and all-or-none beliefs. High expectations refer to people’s high expectations toward technology, while all-or-none beliefs indicate that individuals view technology as either perfectly functioning or completely broken. The results showed that agreeableness, intellect/imagination, high expectations, and all-or-none beliefs significantly correlate with trust. Extraversion, agreeableness, intellect/imagination, and high expectations had significant correlations with public and military use intentions. Agreeableness and high expectations were associated with higher trust and a greater public and military desire to use. The trait intellect/imagination was associated with lower trust and desire to use. Extraversion was associated with a greater desire to use, and all-or-none beliefs were associated with lower trust [61].
Two other studies looked at different variables [70, 78]. One of these examined the impact of mental models and found a significant positive association between having the correct mental model and trustworthiness, while no significant association was observed with robot power [70]. Another study examined the influence of education level and people’s comfort with interacting with new robots [78]. This study found that a higher comfort level of participants leads to greater trust, perceived occupational competency of security robots, and a stronger preference for security robots over male or female human agents. Also, education levels significantly impact people’s trust in security robots and their degree of preference for security robots over human female agents, although the direction of this impact was not reported [70]. However, this study did not find an impact of education levels on people’s perception of occupational competency or their degree of preference for robots over human male agents.
Key takeaways:
(1)
Mixed results were found regarding the impact of human gender on security robot acceptance, although the significant results all suggest that women show greater trust in and intention to use security robots than men.
(2)
The impact of age on acceptance is unclear, with the literature finding mixed results.
(3)
Familiarity with technology is associated with more optimistic expectations about deploying security robots.
(4)
Specific personality traits, particularly agreeableness and intellect/imagination, strongly predict trust and intentions to use security robots.

3.6.2 Thrust Area 2: The Robot in Human–Security Robot Interaction.

In this review, 55.3% of studies examined robot factors, which include the robots’ gender, physical and non-physical design, physical and non-physical behavior, reliability, and presence. They focused on perceptual robot acceptance (17 studies), behavioral robot acceptance (5 studies), perceptions of robots (9 studies), and user performance (1 study).
Human Performance. One paper investigated the influence of robot factors on user performance. This study assessed police officers’ use of security drones for operational tasks and found that the use of drones decreased completion time and reduced the number of targets overlooked compared to not using drones [86]. The study assessed situational awareness among participants and identified an enhancement in information quality with drone deployment while the overall mental workload remained consistent. Notably, temporal demand decreased with drone use. This study also contrasted the cognitive demands of operating a single drone on one monitor versus controlling a swarm of drones on single or multiple monitors. The findings indicated an increased mental workload when managing multiple drones, irrespective of the monitor setup [86]. Single-monitor configurations for multiple drones led to perceptions of greater time pressure than using multiple monitors. Despite no significant variations in participants’ stress or insecurity levels, the complexity was perceived to be higher when supervising a multi-monitor drone swarm compared to a single drone operation.
Key takeaways:
(1)
The use of security drones improves performance and information quality without increasing the overall mental workload.
(2)
Managing a swarm of security drones, particularly on a single monitor, increases perceived complexity and time pressure compared to operating a single drone.
Perception of Robots. Nine studies investigated the influence of robot-related factors on human perception of the robot. Four papers [5456, 108] examined robot appearance, with three specifically investigating the impact of anthropomorphic design [54, 55, 108], yielding similar results when it comes to likability but conflicting results when it comes to safety. One study manipulated different robot types (anthropomorphic, zoomorphic, machine-like) to examine the interaction between a robot’s appearance and task, finding no significant results in perceived likability [54]. Another study examined the direct influence of anthropomorphism on the perceived likability, intelligence, and safety of security robots and whether this relationship was moderated by the interaction scenario. However, all relationships were found to be non-significant [108]. In contrast, another study explored the impact of anthropomorphism on home security robots, discovering that robots with humanlike features were rated as significantly less physically safe and of lower quality compared to robots without humanlike features [55].
Another study examined the impact of adding a university logo or flashing lights on a COVID-19 security robot [56]. The university logo or flashing lights increased perceptions of the robot’s authoritativeness, with the logo perceived as more authoritative than the flashing lights [56]. However, the security robot design that did not include a logo or flashing lights was perceived as friendlier and less aggressive. Meanwhile, there was no difference in perceptions of the robot’s innovativeness, inviting nature, reliability, professionalism, or elegance between robots with and without the logo and flashing lights.
The impact of the politeness of robots was examined in two studies, both reporting significant effects [40, 41]. Polite security robots were perceived by humans as friendlier, fairer, and displaying more appropriate behavior [41]. Conversely, impolite security robots were perceived as more threatening and unfair, with lower perceived functionality and correctness compared to polite counterparts [40].
The gender, personality, behavior, and interrogation approaches of robots have also been individually examined. Security robots gendered as male were seen as having higher affective evaluations, attitudes (marginally), cognitive evaluations (marginally), and subjective norms than the female-gendered security robots [88]. Introverted security robots were seen as having more positive affective and cognitive evaluations and greater subjective norms (marginally) compared to extraverted robots [88]. Security robots that used body movements to convey messages were seen as more aggressive than robots that carried a signboard to convey messages [4]. Finally, one study [46] investigated suspects’ and observers’ perceptions of interrogator robots employing either innocent-presumptive or guilt-presumptive interrogation approaches. The results indicated that human suspects did not perceive the robots using different interrogation approaches as exerting varying levels of effort to elicit confessions, nor did they perceive any differences in their friendliness. However, there was a marginally significant difference in their perceived pressure of the robot [46]. Additionally, there was a significant difference in the perceived anxiety of the robot, with those using innocent presumptions appearing less anxious. As for the observers’ perceptions, their predictions of the robot’s judgment depended on the interrogation approach. They perceived that the robot tried harder and applied more pressure to obtain a confession under guilt-presumptive assumptions.
Key takeaways:
(1)
Anthropomorphic robots are perceived as less safe than non-anthropomorphic designs, but there is no difference in likability.
(2)
Design features can influence perceptions of a robot’s authority and approachability but do not affect perceptions of innovativeness, reliability, or professionalism.
(3)
Robot politeness strongly influences people’s perceptions of security robots, with polite security robots being viewed more favorably than impolite security robots.
(4)
Male-gendered security robots receive higher affective evaluations than female-gendered ones. Additionally, introverted robots are perceived more positively than extraverted ones.
Robot Acceptance. Twenty-one studies investigated the influence of robot-related factors on humans’ acceptance of robots, with five measuring behavioral acceptance and 17 measuring perceptual acceptance.
1. Behavioral Acceptance. Five papers investigated the influence of robot factors on behavioral robot acceptance [12, 46, 54, 56, 67]. A security robot’s design features were examined in two studies [54, 56]: one concentrated on anthropomorphic design [54], while the other examined the impact of flashing lights [56]. The robot’s appearance (anthropomorphic, zoomorphic, or mechanical) along with its assigned role (security or tour guide) did not significantly influence participants’ engagement or active response to the robots [54]. However, red and blue flashing lights did increase participants’ engagement, although it did not influence their tolerance for completing the interaction session [56]. A robot’s presence also increased compliance among passersby, who were more inclined to wear their masks as requested when interacting with or noticing the robot [56].
Three studies investigated the impact of a security robot’s behavior on human behavior. One study revealed that admonishing pedestrians significantly increased the success rate of halting inappropriate behaviors, such as phone use while walking, compared to being more friendly [67]. However, it was noted that admonishing pedestrians was not effective in persuading them about the wrongness of their actions; instead, they stopped primarily due to the surprise of the initial encounter. Another study examined the impact of arming the security robots with a lethal or non-lethal weapon [12]. The findings revealed that participants were more inclined to comply with robotic peacekeepers when they were equipped with lethal backup weapons compared to non-lethal ones. Participants also decided to comply with a robot more slowly when it was guarding a checkpoint equipped with a lethal weapon but made the decision most rapidly when it was guarding a checkpoint without a lethal weapon. One study also investigated the impact of robot interrogation approaches [46]. In this study, researchers recruited two groups of participants; one was assigned the role of human suspects interacting with an interrogator robot, and the other was assigned as an observer watching the interaction. The robot used an interrogation approach that was either innocent-presumptive or guilt-presumptive. The innocent-presumptive approach involved the robot asking questions as if the suspect were innocent and did not commit the crime. In contrast, the guilt-presumptive approach involved asking questions assuming the suspect was guilty. The results indicated that human suspects tended to be more friendly toward robot interrogators when the approach was innocent-presumptive rather than guilt-presumptive. Also, human observers rated the suspects’ denial behavior as more defensive and more vehement in their denials when faced with guilt-presumptive interrogators, but no differences were observed in terms of anxiety. However, human suspects believed their denial behavior did not differ in terms of anxiety, defensiveness, or forcefulness. The observers’ judgments of suspects’ guilt were also found to be independent of the robot interrogator’s approach.
Key takeaways:
(1)
Visual design features can affect interaction dynamics with security robots.
(2)
The presence of a security robot can increase compliance.
(3)
Admonishing behavior from security robots effectively increases compliance.
(4)
People are more inclined to comply with security robots equipped with lethal weapons than non-lethal weapons.
(5)
Suspects are friendlier toward security robots using an innocent-presumptive interrogation approach as opposed to a guilt-presumptive interrogation approach.
2. Perceptual Acceptance. Seventeen studies investigated various factors related to robots and their impact on perceptual acceptance. The influence of gendering security robots on their acceptance was investigated in four studies [14, 78, 88, 89]. Half of these studies reported a preference for male security robots over female ones, while the remaining studies found no discernible differences in acceptance based on gender. In the studies that found differences, male-gendered security robots, manipulated through voice and names, were seen as more useful [89] and marginally easier to use [89], and they had greater perceived behavioral control (marginally) [89] than female-gendered security robots, explaining why participants also reported higher intention to use male security robots over female security robots [88, 89]. In contrast, two other studies indicated that the robot’s gender had no effect on various measures of trust and preference compared to human security personnel [14, 78].
Three studies examined the impact of robot anthropomorphic design, with one supporting the claim that anthropomorphic design enhances security robot acceptance [108], while the other two did not [35, 54]. The anthropomorphic appearance of a security robot was positively linked to trustworthiness and intention to use [108]. However, contrasting findings suggest that the anthropomorphic appearance has no significant impact on trust [54, 108] or satisfaction [54] and is negatively associated with preference for the robot [35]. This discrepancy in findings may be attributed to potential moderators. For instance, the significance of a robot’s anthropomorphic appearance was found to vary depending on the social demand of the role. Humans tended to prefer more human-like robots for socially demanding tasks, while they favored machine-like robots for less socially demanding tasks [35]. The discrepancy in findings may also be attributed to variations in the definition of humanlike and machine-like robots; we note that each study operationalized anthropomorphism somewhat differently. Beyond these studies, one study [13] investigated law enforcement officers’ acceptance of a communicative security robot, finding a high level of trust among the officers. They interviewed participants to identify design factors contributing to trust and found that the robot’s anthropomorphic appearance was unimportant for nearly half of the officers. Instead, participants highlighted factors such as size, voice, volume, emotion expression capability, battery life, and camera views as important considerations for reliability.
Two studies investigated whether the type of threat detection employed by a security robot influenced whether its recommendation was accepted, finding mixed or conflicting results [57, 64]. The two types of threat analyses examined were physical-based analysis and psychology-based analysis. The physical-based analysis involves the robot discerning threats through direct physical cues, like detecting chemicals or identifying weapon-shaped objects in X-ray images. Conversely, psychology-based analysis entails the robot interpreting human intentions, utilizing information such as facial expressions and eye movements. One study used text-based scenarios and observed a significant impact on trust; participants exhibited higher confidence in the robot’s physical-based analysis and were more inclined to follow its recommendations than those from the psychology-based analysis [64]. Conversely, the other study used a virtual environment with the same types of threat detection but found no significant main effect on trust [57].
Two studies examined the impact of a robot’s expressed social intent on both trustworthiness and trust, yielding consistent results for trustworthiness but mixed findings for trust [60, 62]. In both studies, the security robot informed participants that its social intent was to protect the human visitors, to protect the building occupants, to be self-protective, or to be self-sacrificial. Self-protective means that it would prioritize protecting the robot itself; protecting visitors means maximizing protection and well-being for visitors; protecting occupants means maximizing protection for personnel within the secure area; and protecting visitors with self-sacrifice means prioritizing the safety of the visitors over the robot, even if the robot were to be destroyed. One study found that the social intent of a security robot had a significant impact on people’s perceived integrity and benevolence but not on trust or perceived ability [62]. The other study found a significant impact on perceived integrity, benevolence, and trust but not on perceived ability or the desire to use [60]. The condition of robot self-sacrifice was associated with higher perceived integrity and benevolence in both of these studies [60, 62] and higher trust in one study [60]. An interaction between robot autonomy and stated social intent was also found [60]. When the robot was intended to protect occupants, participants’ perceived ability and integrity were higher when it had low autonomy than when it had high autonomy. However, when the robot’s intent was self-sacrifice, participants’ perceived ability and integrity were higher when the robot had high autonomy than when it had low autonomy.
The impact of security robot autonomy was examined in two studies: a quantitative study finding non-significant effects [60] and a qualitative study highlighting serious concerns with autonomy [63]. In the quantitative study, the degree of autonomy did not impact the participants’ trust, trustworthiness, or intention to use [60]. However, the other study, which used semi-structured interviews to understand public perceptions of security robots, identified the robots’ level of autonomy as a major concern [63]. Participants expressed concerns about the possibility of security robots being hacked or hijacked.
Beyond that, robots’ presence, reliability, personality, directive communication style, defensive behavior, and weapon utilization were all examined in different studies. A semi-structured interview study of the public’s opinions of security robots discovered that participants believed one significant benefit of these robots was their ability to deter crime merely through their presence [63]. Another study examined the effect of security robots’ reliability and found that it has a significant influence on trust and trustworthiness [62]. Specifically, when the security robot accurately denied access to unauthorized individuals, participants trusted it more and thought it had higher ability and integrity. Another study examined the effect of security robots’ personalities on acceptance and found that introverted security robots elicited higher perceived trust, perceived behavioral control, and acceptance than extraverted robots [88].
The impacts of communication style and trust elements were explored for peacekeeping robots [58]. Trust elements included emotion, behavior, and cognition. Participants showed higher trust in robots using the analytic directive style than in those using the comparative style. Participants also reported higher trust for emotion-based appeals than for behavior-based and cognition-based appeals.
The effect of a security robot’s defensive behavior on human acceptance was also examined [24]. This study found that the method of defense significantly influences acceptability, with less forceful approaches being preferable. Specifically, blocking was perceived as more acceptable than non-lethal force, and both were deemed more acceptable than lethal force. Additionally, the use of non-lethal defense was considered more acceptable in response to lethal attacks than to non-lethal attacks. Law enforcement officers’ interaction with drone swarms was also examined [86]. This study found no significant difference in officers’ trust when comparing the use of a single monitor for one drone to that of multiple monitors or a single monitor controlling a swarm of drones. Finally, the acceptance of weapon utilization was examined qualitatively [87]. These researchers asked participants to envision their ideal domestic robots. The results revealed that one popular concept was a house-sitting robot capable of overseeing a home to maintain order and security. Participants imagined a robot that could monitor their physical property and patrol both inside and outside the house. They envisioned a systematic collaboration between mobile robots and in-house surveillance systems as the preferred operational mode. Interestingly, six households specified wanting a security robot but without the risk of it being armed. They preferred features like a loud alarm or the ability to contact security agents. Contrarily, one participant desired a robot that normally roamed the house for cleaning but could deploy a weapon if the security sensors detected abnormal situations [87].
Key takeaways:
(1)
The impact of security robot gender was mixed: some studies found no significant differences, while others indicated that male robots are viewed more favorably.
(2)
Preference for human-like robots is higher for socially demanding tasks, suggesting that the role and social context are crucial in determining the effectiveness of anthropomorphic design.
(3)
The expressed social intent of security robots significantly influences perceptions of integrity and benevolence.
(4)
Autonomy in security robots may not reduce trust outright, but it raises significant safety and ethical concerns that must be addressed.
(5)
People show higher trust in highly reliable security robots than less reliable ones, trust introverted security robots over extraverted ones, and prefer those that use less forceful defense approaches.

3.6.3 Thrust Area 3: The Contextual Factors in Human–Security Robot Interaction.

In this review, 38.3% of the studies directly examined interaction and contextual factors, including robot tasks, security agent types, interaction contexts and usage, and cultural backgrounds. These studies focused on perceptional robot acceptance (13 studies), behavioral robot acceptance (6 studies), and perceptions of robots (6 studies).
Perception of Robots. Six studies investigated whether contextual factors influence perceptions of security robots. Two studies found that national cultural differences can be important [48, 53]. National culture significantly influences people’s overall attitudes regarding security robots’ use of weapons [48]. Specifically, Chinese individuals living in China displayed a more approving attitude toward weapons than Americans residing in China. Another study examined national culture by adopting a generative design methodology and conducting semi-structured interviews [53]. This study found that participants from Korea and the United States had different perceptions of how security robots should be used. US participants expected security robots to be part of a house security system and were comfortable with allowing them to use a weapon. Korean participants expected robots to perform security tasks only for children rather than for the house. Also, Korean participants expected the appearance of the security robot to be friendlier for guarding children, while US participants expected security robots to be more threatening and machine-like.
The impact of agent type (robot vs. human) was examined in two studies and yielded mixed results [20, 41]. One study found that a security task performed by a robot was perceived as significantly more intentional, less surprising, and more desirable than when performed by a human security officer [20]. Conversely, another study found no differences between human security officers and security robots regarding intimidation, fairness, and friendliness [41]. Another study [93] examined the influence of robot tasks and found no significant difference in participants’ attitudes toward robots performing security tasks versus guidance tasks, but participants attributed significantly more masculine traits to the security robot and more feminine traits to the guidance robot. Finally, one study [108] examined the impact of interaction scenarios and found no significant differences in the perceived likability, intelligence, and safety of the robots between indoor and outdoor scenarios.
Key takeaways:
(1)
National culture significantly influences attitudes toward security robots, particularly regarding the use of weapons.
(2)
Mixed results were found concerning whether people’s perceptions of human security officers differ from those of security robots.
(3)
More masculine traits are attributed to security robots and more feminine traits to guidance robots.
(4)
There is little evidence that the context of interaction itself influences perceptions of security robots.
Robot Acceptance. Nineteen studies in thrust area 3 investigated the impact of contextual factors on people’s acceptance of robots, with six measuring behavioral acceptance and 13 measuring perceptual acceptance.
1. Behavioral Acceptance. The impact of contextual factors on behavioral robot acceptance was investigated in six studies. Two of them explored the impact of national culture and found that it affects human engagement and compliance [12, 54]. One study investigated the influence of national culture on participant engagement with robots across tasks ranging from low sociability (security guard) through middle sociability (tour guide, entertainment) to high sociability (teaching) [54]. Interestingly, Chinese and Korean participants demonstrated greater engagement with security robots than German participants, although all participants engaged more with teaching robots and less with the security robot. The authors attributed these potential national cultural disparities to the collectivist nature of Chinese and Korean cultures, which encourages greater receptiveness to communication and suggestions from either type of robot, in contrast to Germany’s more individualistic culture. Another study explored the influence of national culture on participants’ compliance rates with peacekeeping security robots [12]. Notably, Chinese participants residing in the United States exhibited the highest compliance with the robot, whereas Americans living in China displayed the lowest. The authors attributed this to China’s stringent weapon regulations and to police shootings in the United States.
Three studies investigated the influence of agent type (human or robot) and robot task, and all found that human security officers were associated with better human performance [48, 59, 94]. In one study, participants were more inclined to notice and interact with human guards than security robots, with the latter often being ignored [94]. Surprisingly, participants maintained a greater distance from human security guards than from robots; however, this closer proximity to robots may stem from the need to engage with their screens or from curiosity toward the robots. This study also found that humans are more likely to notice and interact with robots in security mode than in guidance mode. Similarly, another study found that participant engagement was significantly higher when interacting with real human guards compared to security robots [59]. These researchers also noted that participant engagement was significantly higher when security robots performed security protocol tasks (such as checking IDs) rather than greeting protocols. Another study explored the impact of robot tasks and found that participants’ compliance rates were significantly higher when robots requested them to relinquish a non-weapon item than a weapon [48]. Finally, another study tested a counter-trivialization strategy that applied a dissonance reduction strategy in the robot’s communication sentences [81]. Unlike baseline robots, which repeated an admonishing sentence twice, robots using this approach delivered a specific counter-trivialization message. The results showed that this strategy significantly increased people’s compliance with the robot’s instructions.
Key takeaways:
(1)
National culture significantly influences human engagement and compliance with security robots.
(2)
People are more likely to notice and interact with human security guards than with robot security guards, and they maintain greater distances from human guards than from robots.
(3)
Engagement with security robots varies based on the task performed by the security robot.
(4)
Implementing a counter-trivialization strategy in a robot’s communication significantly increases human compliance.
2. Perceptional Acceptance. In thrust area 3, 13 studies investigated perceptional robot acceptance outcomes. National culture was the most-explored topic, with six studies, and the majority reported significant effects [11, 16, 47, 58, 78, 110]. Three studies exploring the impact of culture on trust found mixed results. One study tested the effect of participants’ country of residence on trust in the robotic peacekeeper and their interpersonal trust [58]. Interpersonal trust measures the trust between humans, while trust in the robotic peacekeeper is measured along three dimensions—purpose, process, and performance—and is focused on the trust between humans and robots. The results indicated that Americans residing in Japan exhibited significantly higher interpersonal trust than Americans living in the United States and China, possibly due to the high value placed on in-group trust in Japan [58]. In contrast, regarding trust in robotic peacekeepers, Americans living in the United States displayed significantly higher trust than those residing in China, particularly in the performance and process dimensions. Furthermore, Americans in the United States showed significantly more trust in robotic peacekeepers than Americans living in Japan.
Another study examined peacekeeping robots and found a significant effect of culture [11]. Americans living in all countries tended to trust security robots more than other cultural groups. According to the paper, this may be attributed to the widespread adoption of robotic technologies in American culture and industry and the individualistic tendencies in the United States, which might lead US participants to be more accepting of robotic technologies. Contrarily, Japanese participants living in the United States had the lowest trust in security robots. The authors offered a conjectural interpretation suggesting that Japanese participants view robots as more valuable as social partners than as actors in law enforcement [11]. However, another study looked at the influence of race and did not find significant differences in occupational competency and trust [78].
Three other studies explored the impact of culture on robot acceptance and preference, yielding mixed results [16, 47, 110]. One study investigated the impact of culture and found differences in the preferred identity of the robot defender [16]. Japanese participants favored human defenders over robot defenders, whereas US participants showed no significant preference for human over robot defenders. The study also revealed that US participants were more accepting of the idea of robots using lethal force compared to their Japanese counterparts, who preferred non-lethal and blocking defense strategies over other types of self-defense. However, the study did not find a difference in acceptance of robot defenders between US and Japanese participants. Another study gathered people’s attitudes toward AI robots using a survey [47] and found that Japanese university students were significantly more concerned about security robots than their Taiwanese counterparts and agreed that robots should have security functions. Finally, one study surveyed people’s needs for home service robots [110]. The results indicated that Taiwanese and Japanese participants believe home service robots should also function as security robots.
Five studies in thrust area 3 examined the influence of agent type by comparing security robots to human security officers, with three indicating greater acceptance of humans over robots [24, 78, 86] and two indicating a preference for robots over humans [3, 25]. One study found that participants preferred hiring human guards over security robots [78]. Another study reported that participants found it more acceptable for human security officers than for humanoid robots to use non-lethal force for human protection [24]. Additionally, the same study revealed that participants found it more acceptable for a humanoid robot to use force than an autonomous vehicle. Likewise, another study examined law enforcement officers’ trust in drones for surveillance operations and revealed a preference for human oversight rather than autonomous drone operation [86].
The other two studies found a preference for robots over humans [3, 25]. A study was conducted to understand the acceptance of a retail service robot by customers and retail service workers [25]. The retail service robot was capable of providing friendly guidance and admonishing inappropriate customer behaviors. Most customers preferred the robot over the human when it came to being admonished. Likewise, retail service workers preferred using the robot for admonishing customers because it was less offensive and easier for a robot to perform. Another study investigated students’ attitudes toward library robots tasked with providing directions, ensuring security, answering users’ questions, and monitoring activities [3]. The results indicated that students viewed robots as more suitable for security and monitoring than librarians.
Three studies examined the impact of context on the acceptance of security robots and found mixed results [57, 63, 108]. One study explored contextual factors and discovered a strong interaction between danger cues and robot decisions [57]. In the study, scenes were categorized into low, medium, and high danger levels, with the robot tasked to assess the danger of each scene. Participants exhibited greater trust in robots when their decisions aligned with the perceived danger level of the scene, such as labeling low-danger scenes as safe and high-danger scenes as threatening. Similarly, another study identified barriers to security robot acceptance, which were highly context-dependent [63]. Participants saw clear benefits in deploying security robots in high-security areas but were uneasy about their use in low-risk locations. Additionally, acceptance varied between day and night; participants believed robots could enhance safety for children during the day but were concerned about potential negative impacts on the unhoused population at night. Acceptance was also found to be “use-dependent” in that participants were amenable to the technology used for deterring bad behavior but opposed the collection of identifiable data that could lead to personal tracking and monitoring. Some participants also expressed fears about biased decision-making by these robots, such as the potential for unjust arrests of people of color due to surveillance [63]. In contrast to these findings, another study examined the influence of interaction scenarios by comparing an indoor hallway scenario with an outdoor parking lot, revealing no significant differences in people’s trust, trustworthiness, and desire to use security robots [108].
Key takeaways:
(1)
National culture significantly impacts the trust and acceptance of security robots.
(2)
Preferences between human security personnel and security robots vary depending on the context and role.
(3)
Contextual factors associated with different risk levels significantly influence the acceptance of security robots.

4 Discussion and Opportunities

Despite the importance of security in human–robot interaction (HRI) and the efforts of many scholars, there are several major gaps in understanding. Next, we present several notable research opportunities based on these gaps, spanning each thrust area, research methods, and outcomes. Finally, we present a future research agenda.

4.1 Opportunities in Thrust Area 1: Human Factors

4.1.1 Age.

According to our results, most studies have an average participant age between 20 and 40 years, with less attention paid to adults older than 50. However, one important application area for security robots is home security for older adults [6, 44]. Older adults have been found to be less informed about and less confident in security robots as a potential technology [83]. This challenge could hinder the widespread adoption of security robots, underscoring the need for focused research on older adults’ willingness to embrace security robots.
Potential research questions:
How do different age groups perceive the effectiveness of security robots in ensuring their safety?
What are the specific usability challenges elderly individuals face when interacting with security robots?
How do interactions with security robots affect the sense of security and anxiety levels across different age groups?
What specific concerns do vulnerable populations (e.g., elderly, disabled) have regarding security robots, and how can these be addressed?

4.1.2 Gender.

Human gender is one factor that has been widely assessed across papers in this review, although mixed results have been found. Most significant results point in the same direction, indicating a higher preference among women for general security robots [34, 111]. However, Wang [103] studied people’s adoption of domestic robots that perform both housework and security tasks and found that women view the adoption of domestic robots as riskier than men do, indicating complex gender attitudes toward different types of robots. Therefore, future researchers need to consider the role of specific usage scenarios when examining gender effects and separate these effects accordingly.
Additionally, given that most studies in this review report significant gender-based disparities in perceptions and acceptance of security robots, there is a crucial need for gender balance in security robot studies to prevent skewed findings and biased results caused by gender imbalances. Unfortunately, as shown in Figure 6, many studies in this review have uneven gender distributions [54, 86, 110]. Future studies should pay more attention to potential gender imbalances during participant recruitment.
Potential research questions:
Do different gender groups experience different emotional responses (e.g., feelings of safety, intimidation, anxiety) when encountering security robots? Is there a difference in their behavioral responses to instructions or commands issued by security robots?
What adjustments can be made to the interface of security robots to accommodate different gender-based interaction preferences?

4.1.3 Individual Factors.

Apart from age and gender, only a few individual-level factors have been studied in the current literature on human–security robot interaction, including education levels, technology experience, and personality traits. However, previous HRI research has suggested that a wide range of individual factors have an important influence on HRI, such as income levels, previous robot experience, and interest in science and technology, among others [30, 74, 112]. Therefore, future studies should consider these additional individual factors and further explore their effects. At the same time, researchers could examine some security-robot-related individual factors, such as attitudes toward law enforcement and expectations toward security technologies.

4.2 Opportunities in Thrust Area 2: Robot Factors

4.2.1 Physical Appearance.

The current review uncovered ambiguity regarding the relationship between anthropomorphism and security robot acceptance. Specifically, three studies in this review examined the effect of anthropomorphism on robot acceptance and found conflicting results. One study found that less anthropomorphic robot designs were associated with higher acceptance [35], whereas another found that more anthropomorphic robot designs led to increased acceptance [108]. A third study found no significant difference [54].
One explanation for the inconsistencies is the lack of a standard measure of anthropomorphism used across studies. For example, upon closer examination of the images presented in their respective papers, we noticed that Li et al. [54] employed a robot that, despite being labeled as “human-like,” featured a mechanical appearance, was constructed from Legos, and lacked crucial facial elements like a nose or mouth. In contrast, Goetz et al. [35] utilized a “machine-like” security robot with a more human-like appearance, characterized by a rounded head, complete facial features, and expressions. In essence, translating findings across studies to discern the impact of anthropomorphism on acceptance would be difficult. Therefore, there is a need for future research to adopt a more standard measure of anthropomorphism. Future studies on anthropomorphism could select robots from the existing anthropomorphic robot database [73] and report the relevant anthropomorphism scores to enhance the consistency and generalizability of the results.
At the same time, more studies should be conducted to explore other aspects of security robot appearance beyond anthropomorphism. In the context of security robots, people’s interactions with robots may be occasional and infrequent, making it crucial to form positive initial trust [41]. The appearance of the security robot, in this case, plays an important role in forming initial trust. It establishes social expectations [21] and further affects people’s interaction with the robot. Yet, only two studies were found to have explored other appearance-related design factors, such as logos and size [13, 56]. The lack of studies on appearance provides an urgent opportunity for researchers.
Potential research questions:
How does the physical appearance of security robots influence public perceptions of their effectiveness and authority? What design enhances compliance with robots’ instructions?
How does the appearance of a security robot affect the emotional response of individuals, such as feelings of safety or intimidation?

4.2.2 Personalities.

Across all studies in this review, only one study, Tay et al. [88], directly examined the effect of security robots’ personalities on acceptance and found that introverted security robots elicited higher perceived trust and higher acceptance than extraverted robots. However, several reviews have found that humans prefer more extraverted robots [75, 76]. This contradiction highlights the importance of examining robot personalities for security robots. Further research is needed to identify which robot personalities are most effective in fostering the acceptance of security robots. This research may or may not confirm previous studies on robot personalities [28].
Potential research questions:
What personality traits in security robots most effectively enhance public trust and perceived reliability?
Are there specific personality traits in security robots that reduce stress or fear during interactions?
What personality traits in security robots lead to higher rates of compliance and cooperation in emergencies?

4.2.3 Cybersecurity.

Cybersecurity is a topic within human–security robot interaction that warrants further research attention. Cyber safety refers to the potential for attacks on robotic systems to have safety ramifications [80], and it has been a significant concern in robotics [79]. Considering the potential for security robots to be equipped with force or weaponry, the problems associated with hacking or hijacking pose a greater safety concern than for many other types of robots. Only one study in the review addressed this concern [63]. The researchers discovered that the public’s foremost concern revolved around the possibility of security robots being hacked or hijacked. Marcu et al. [63] clearly identified an important yet understudied area of research on human–security robot interactions. Future research on security robots should pay more attention to this issue, further investigating the unique influencing factors of security robots based on existing research on general robot cybersecurity.
Potential research questions:
How does the cybersecurity posture of a security robot influence human trust and willingness to interact with it?
How can secure and user-friendly authentication mechanisms be integrated into human interactions with security robots?
What are the potential trade-offs between enhanced security measures and the ease of interaction for users?
What are the implications of data collection by security robots on public privacy perceptions, and how can these be addressed?

4.3 Opportunities in Thrust Area 3: Contextual Factors

4.3.1 Security Tasks.

People’s perceptions and acceptance vary across different tasks. Yet, our review indicates that access control tasks were the most commonly studied. These tasks typically involve brief communication, access verification, and responses to various verification outcomes. However, real-world security robots are used for various tasks such as patrolling, detecting strangers, automatically calling the police after noticing something abnormal, and physically counteracting an intruder [52, 66, 91]. Hence, future studies should explore and implement a broader range of structured security robot tasks to enhance our understanding of human–security robot interactions.
The tasks examined in many studies, including access control tasks, tend to be structured with clear relational hierarchies. However, real-world interactions are often less hierarchical and more unstructured or spontaneous. In communities, police officers are commonly seen as approachable resources, with people feeling comfortable interacting with them in informal ways. For instance, individuals may spontaneously seek assistance or guidance from police officers on various matters. Hence, there is a need for research that delves into more spontaneous and unstructured interactions with security robots to expand our understanding in this area.
Potential research questions:
What metrics should be used to evaluate the effectiveness of human–robot collaboration in different security tasks? How can these metrics be applied to continuously improve task performance and outcomes?
Which specific security tasks (e.g., patrolling, surveillance, threat detection) are most effectively performed by robots, and which require human involvement? How can a task allocation framework be developed to dynamically assign tasks between humans and robots based on real-time needs and capabilities?

4.3.2 Application Domains and Interaction Scenarios.

The application domain is likely to influence humans’ acceptance of security robots. For example, Marcu et al. [63] found that people’s acceptance of a security robot depends on when and where it is deployed. Therefore, acceptance of the same security robot may vary considerably depending on when and where it is deployed. This could explain many of the inconsistent results our review uncovered. However, we do not believe this can be solved entirely by more studies in different contexts alone. Instead, theories are needed to explain why factors associated with when and where a robot is deployed might influence the acceptance of security robots. These theories and their validation are likely to provide us with more generalizable knowledge that can span diverse situational contexts.
Potential research questions:
How do different physical settings (e.g., crowded vs. sparsely populated areas) and people’s perceived risk of those contexts affect public acceptance and perception of security robots?
In which types of environments (e.g., urban areas, residential neighborhoods, commercial spaces, airports) are security robots most likely to be accepted by the public?
What are the optimal times of day or types of events during which deploying security robots would maximize public acceptance (e.g., during scheduled events, peak hours, night-time patrols)?

4.3.3 Preference for Humans over Security Robots.

Although most of the studies in this review found that humans prefer human security guards over security robots [24, 48, 59, 78, 86, 94], some studies found a preference for robots over human security guards [3, 20, 25]. One approach to promoting the acceptance of security robots is to assign them roles and tasks that the public feels are appropriate. Therefore, future research is needed to investigate not only when humans prefer security robots over humans but, more important, why.
Potential research questions:
How do emotional connections and empathy influence the preference for human security personnel over robots?
What are the psychological impacts of having human security personnel versus robots in various security scenarios?
How does the effectiveness of communication influence preferences for human security personnel over robots?
In what specific scenarios or contexts do people prefer human security personnel over robots (e.g., conflict resolution, emergency response)?

4.3.4 Culture Factors.

Although culture has been shown to be important in security robot acceptance, research to date has concentrated on a small set of cultural regions. In this review, the United States, China, Japan, and Korea are the four most commonly studied countries in examining the impact of culture. For example, the effect of individualism/collectivism on the acceptance of security robots differs across cultural and social settings [49]. However, there appears to be a notable gap in research concerning other cultural regions. This gap may be attributed to our restriction of the literature to English-language publications. Nonetheless, future research is likely needed to explore cultural differences from other regions such as South America or Africa, among others.
Potential research questions:
How do cultural norms and values regarding technology and innovation influence the acceptance of security robots?
How do varying levels of trust in government and institutions across cultures impact the acceptance of security robots?
What cultural differences exist in the willingness to trade privacy for security, and how do these differences influence the deployment of security robots?
How can security robots be designed to adapt to culturally specific behaviors and customs to enhance acceptance?

4.3.5 Social, Political, and Economic Factors.

The studies addressing cultural influences often overlooked the need to differentiate these influences from interconnected factors like political contexts, economic conditions, social landscapes, and governmental policies. However, each of these elements holds intrinsic significance worthy of individual examination. For example, politically, the landscape varies significantly across countries, influencing public policy and attitudes toward security technologies [92]. Political ideologies, along with government policies on surveillance and privacy, could heavily impact public perceptions and acceptance of security robots. Social and economic factors further complicate the landscape. For example, the Global South spans regions of both Asia and Africa whose social and economic conditions differ markedly, and these differences cannot be reduced to culture alone. In conclusion, it is imperative to disentangle culture from other variables to gain a more nuanced understanding of its effects and those of related factors on human–security robot interactions.
Potential research questions:
What role do societal norms and values play in shaping human interactions with security robots?
What regulatory frameworks are necessary to ensure the ethical use of security robots?
How does the introduction of security robots affect job markets for human security personnel?
How do media portrayals of security robots affect public preferences?

4.3.6 Legal Aspects.

Regulation and legal frameworks for security robots are other areas that require attention and could influence the adoption of security robot technology. Although security robots have been widely adopted across various countries [38, 65, 102], there are few, if any, corresponding regulations or laws governing them in those countries [1, 33, 43, 69]. Scholars have emphasized the urgent need to address legal challenges, including establishing regulatory oversight, developing a system for allocating liability, and addressing privacy and data protection concerns [7, 43, 45, 105]. For example, Joh [45] highlighted the critical need for legal frameworks that could one day assist courts in cases involving deaths caused by security robots and suggested discussing the legality of whether security robots could make self-defense decisions on behalf of their owners. Isaacs et al. [43] proposed extending the laws currently applicable to police officers to robots to establish binding regulations and oversight, while also considering whether such rules can reasonably apply to robots. One paper in this review, by Marcu et al. [63], also highlighted people’s concerns regarding individual privacy during robots’ passive data collection.
In the United States, while there is no comprehensive federal or state law specifically governing security robots, their use is influenced by various existing laws. State and federal privacy laws, particularly concerning data collection and surveillance, significantly affect security robots, especially regarding video or audio recording. For example, several state-specific regulations could apply to security robots. The California Consumer Privacy Act governs personal data collection, storage, and use, granting consumers rights like accessing, deleting, or opting out of data sharing [85]. These provisions are particularly relevant to security robots gathering personal data, ensuring they adhere to privacy protections. The Illinois Biometric Information Privacy Act regulates biometric data collection, such as facial recognition, applicable to security robots equipped with such technologies [39]. Cities have begun implementing ordinances to regulate the use of drones and robots, with some specifically addressing security robots in public spaces. Notably, in 2019 San Francisco banned city agencies, including law enforcement, from using facial recognition technology [15]. This ban highlights broader concerns about surveillance and the ethical considerations of deploying security robots with advanced monitoring capabilities. Similarly, cities like Somerville and Boston in Massachusetts have banned facial recognition, potentially influencing security robot deployment and capabilities in those areas [2].
At the federal level, the White House recently introduced the Blueprint for an AI Bill of Rights [90]. This blueprint identified five principles (effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback) to guide the design, use, and deployment of automated systems. Each principle has the potential to impact the use of security robots directly. The Wiretap Act and the Electronic Communications Privacy Act (ECPA) govern the interception and recording of communications [98], which is relevant to security robots that capture audio or video data. Workplace safety regulations, such as those from the Occupational Safety and Health Administration [100], apply to ensure the safe operation of robots in work environments. Additionally, security robots must comply with the Americans with Disabilities Act to avoid interfering with accessibility [99]. If security robots use wireless communication, they must adhere to Federal Communications Commission regulations [32]. Liability laws also hold companies accountable for any damages or injuries caused by security robots.
Finally, although no international law specifically governs security robots, there is a growing recognition of the need for guidelines to manage their use responsibly. As technology advances, international dialogue and cooperation are expected to increase to address the challenges posed by the widespread use of security robots. Currently, several international frameworks and agreements indirectly impact the regulation of security robots:
Human rights instruments such as the Universal Declaration of Human Rights [96] and the International Covenant on Civil and Political Rights [97] emphasize privacy and personal freedom, thereby indirectly impacting the use of security robots, particularly in surveillance contexts.
In terms of data protection, the European Union’s General Data Protection Regulation [95] serves as a global benchmark, impacting how security robots handle data across borders.
Ongoing United Nations discussions on AI ethics, the OECD AI Principles, and efforts by organizations like the International Organization for Standardization [42] and IEEE are shaping ethical guidelines and standards for robotic systems.
Enhancing the regulatory and legal framework for security robots is crucial for expanding their future deployment scope. A comprehensive exploration of these boundary conditions is essential for enabling security robots to undertake a broader range of tasks and fulfill their primary safety-ensuring purpose more effectively. Future research should prioritize investigating the legal aspects of security robot deployment and further examining public concerns and opinions.
Potential research questions:
How do local, national, and international policies influence the deployment and acceptance of security robots?
How do existing legal and regulatory frameworks shape public preferences for security robots? What changes in regulations could affect public acceptance and preferences?
What ethical deployment strategies can be developed to address public concerns and potentially increase acceptance of robots?

4.4 Opportunities Across All Thrust Areas

4.4.1 Critical or Social Justice Views.

Empirical studies taking a critical or social justice perspective on the deployment of security robots remain scarce and need to increase, despite the growing public discourse about security robots’ potential to support biased policing practices [63]. These issues have been highlighted by position papers focusing on the impact of security robots on the Black Lives Matter movement and their use to legitimize unethical policing practices [7, 105].
Future research should leverage a more critical or social justice view to help identify and address public concerns about human–security robot interactions. These issues are not likely to be easy to address. For instance, Marcu et al. [63] underscored the paradox inherent in public concerns regarding security robots. In their study, several female participants highlighted the benefits of security robots, such as protection from criminals and also from male police officers who may grope them. However, these women also expressed concern about security robots being used to target Black people in White neighborhoods. Such studies may also employ community-based research methods, which can offer deeper insights by engaging directly with diverse groups, such as minority and low-income communities, ensuring that a broad range of perspectives is considered [17, 63].
Potential research questions:
How do security robots influence power dynamics between authorities and the public, particularly in marginalized communities?
How do automated decision-making processes in security robots affect the agency of individuals, especially those from vulnerable or marginalized groups?
What methods can be used to identify and mitigate biases in the deployment and operation of security robots?
How can community-based ethical guidelines be developed to govern the use of security robots?

4.4.2 Theoretical Framework.

This review proposes the Human–Security Robot Integrative Research Model, which provides a potential research framework for future security robot researchers. Only a limited set of topics has been examined in the area of human–security robot interaction, and various potential topics remain to be explored in the context of security robots within each thrust area. Future researchers could refer to our model to systematically or selectively investigate factors in each thrust area.
There is also a need to identify relevant theories that can be used to guide our understanding of human–security robot interactions. Ultimately, valuable research inherently contributes to theory. For example, recent work recommends using French and Raven’s framework of power bases to understand human interactions with robots [37]. Future research could explore these and other such frameworks to determine whether they are relevant for the domain of security robots. We list theoretical frameworks at the end of this section that could guide future research exploring interactions with security robots.
A future systematic review could specifically focus on theoretical papers in this area, examining the employed theoretical frameworks. This would provide researchers with a clearer overview of current theoretical frameworks, advancing the field and enhancing the generalizability of existing results. Overall, such a review could help identify important theories and enrich our comprehension of human–security robot interactions.
Potential theoretical frameworks:
Technology acceptance model (TAM) [19]: TAM is one of the most widely adopted frameworks exploring factors that influence user acceptance of new technologies. The model can be used to assess how users perceive the benefits and ease of interacting with security robots and how these perceptions influence such robots’ acceptance and use (see the illustrative sketch following this list).
Unified theory of acceptance and use of technology (UTAUT) [101]: UTAUT expands upon TAM by incorporating additional constructs such as social influence, which could be useful for understanding how social pressures, available resources, and individual intentions affect the acceptance and use of security robots.
Autonomy acceptance model (AAM) [107]: AAM expands upon TAM by incorporating autonomy and risks, helping to understand how a security robot’s autonomy impacts its acceptance.
Diffusion of innovation (DOI) theory [77]: DOI theory explains how, why, and at what rate new technologies spread through cultures. It can help analyze how security robots are adopted in different communities and what factors contribute to faster or slower adoption.
Social presence theory [84]: Social presence refers to the feeling of being with another in a communication medium. Higher social presence can enhance communication and interaction quality. This framework can be used to explore how the design and behavior of security robots affect users’ feelings of social presence and their comfort level during interactions.
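To illustrate how such a framework might be operationalized, the following is a minimal sketch in Python of a TAM-style analysis in which perceived usefulness and perceived ease of use predict behavioral intention to use a security robot. The data, variable names, and coefficients are purely hypothetical assumptions for illustration and are not drawn from any reviewed study.

```python
# Minimal illustrative sketch of a TAM-style analysis with hypothetical data.
# Assumptions: simulated 7-point Likert composites for perceived usefulness (PU)
# and perceived ease of use (PEOU); behavioral intention (BI) is the outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of survey respondents

pu = rng.integers(1, 8, n).astype(float)    # perceived usefulness (1-7)
peou = rng.integers(1, 8, n).astype(float)  # perceived ease of use (1-7)

# Hypothetical data-generating process: BI depends on both constructs plus noise.
bi = 0.5 * pu + 0.3 * peou + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([pu, peou]))  # intercept, PU, PEOU
model = sm.OLS(bi, X).fit()
print(model.summary())  # estimated contribution of each construct to BI
```

In actual studies, the constructs would be measured with validated questionnaire items rather than simulated, and structural equation modeling is often preferred over simple regression for testing TAM and UTAUT models.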

4.4.3 Research Settings.

According to our results, one-fourth of the studies in this review utilized images or videos to simulate participants’ interactions with security robots, while another fourth measured responses through questionnaire items without robot videos or images. However, such indirect interactions could make it difficult to assess how security robot features affect users’ actual acceptance of them. This limitation also constrains the ecological validity of the study results because people’s real interactions with security robots usually occur in field settings. Hence, we advocate not only for more experimental inquiries but also for more field studies using physical robots, thereby yielding results that are more ecologically valid.

4.4.4 Behavioral and Perceptional Acceptance Measures.

Based on this review, both behavioral and perceptual measures have been used to assess acceptance. However, in at least one area their results have differed. For example, culture is found to have a significant influence on all behavioral acceptance measures studied [12, 54], whereas culture yields mixed results when using perceptual acceptance measures [11, 16, 47, 58, 78, 110]. The cause of these differences remains unclear; it is uncertain whether they stem solely from one set of studies focusing on behavioral measures while others concentrate on perceptual aspects of acceptance. Future research could employ both measures to confirm or refute any potential differences associated with these measures. The identification of such differences would help researchers better design studies and interpret existing studies.

4.4.5 Study Transparency.

Future researchers should increase the transparency of their study methods to improve the generalizability and reproducibility of their results. This review has uncovered many mixed or conflicting findings across studies, which could stem from differences in experimental equipment, scene setups, robot types, and questionnaire designs used in the experiments. A clear reporting of the study measures and experimental settings could facilitate comparisons among studies [11]. Moreover, enhancing transparency enables experiments to be reproduced and results to be validated [36]. We strongly advocate for future researchers to provide detailed reports of their experimental settings, including the robot type utilized and questionnaire items employed. This transparency will bolster the replicability and enhance the generalizability of their findings.

5 Conclusion

Security robots serve a unique role and are becoming increasingly important to our society, making it essential to understand the interaction between humans and security robots. To identify what we know and what we should know about human–security robot interaction, our review identified and analyzed 47 studies drawn from 4,116 articles. It proposed three thrust areas following a previous framework, identified three main outcome areas, and summarized what has been found in this field. The article also pointed out current research gaps and provided potential guidance for future security robot design and human–security robot interaction studies.

References

[1]
State Council of the People’s Republic of China. 2021. 14th Five-Year Plan for Robotics Industry Development. Technical Report. State Council of the People’s Republic of China, Beijing, China.
[2]
ACLU of Massachusetts. 2019. Somerville Becomes First East Coast City to Ban Government Use of Face Recognition Technology: Massachusetts City Joins Growing Nationwide Movement to Bring the Technology Under Democratic Control. Retrieved September 19, 2024 from https://www.aclu.org/press-releases/somerville-becomes-first-east-coast-city-ban-government-use-face-recognition
[3]
Adebowale Adetayo, Kabiru Abwage, and Tolulope Oduola. 2023. Robots and human librarians for delivering library services to patrons. The Reference Librarian (2023), 1–16.
[4]
Siddharth Agrawal and Mary-Anne Williams. 2017. Robot authority and human obedience: A study of human behaviour using a robot security guard. In Proceedings of the Companion of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 57–58.
[5]
Siddharth Agrawal and Mary-Anne Williams. 2018. Would you obey an aggressive robot: A human-robot interaction field study. In Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, New York, NY, 240–246.
[6]
Neziha Akalin, Annica Kristoffersson, and Amy Loutfi. 2019. The influence of feedback type in robot-assisted training. Multimodal Technologies and Interaction 3, 4 (2019), 67.
[7]
Peter Asaro. 2016. Will #BlackLivesMatter to Robocop? In Proceedings of the International Conference on WeRobot: Conference on Legal and Policy Issues Relating to Robotics. U. Miami School of Law, 1–2.
[8]
Danilo Avola, Gian Luca Foresti, Luigi Cinque, Cristiano Massaroni, Gabriele Vitale, and Luca Lombardi. 2016. A multipurpose autonomous robot for target recognition in unknown environments. In Proceedings of the IEEE 14th International Conference on Industrial Informatics (INDIN). IEEE, New York, NY, 766–771.
[9]
Icek Ajzen. 1980. Understanding Attitudes and Predicting Social Behavior. Pearson, Englewood Cliffs.
[10]
Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1, 1 (2009), 71–81.
[11]
James P. Bliss, Qin Gao, Xiaoxiao Hu, Makoto Itoh, Nicole Karpinsky-Mosely, Shelby K. Long, Yiannis Papelis, and Yusuke Yamani. 2021. Cross-cultural trust of robot peacekeepers as a function of dialog, appearance, responsibilities, and onboard weapons. In Trust in Human-Robot Interaction. Elsevier, 493–513.
[12]
James P. Bliss, Shelby K. Long, and Nicole Karpinsky-Mosley. 2019. Cross-cultural reactions to peacekeeping robots wielding non-lethal weapons. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63, Sage, 2292–2297.
[13]
Fareed Bordbar, Roya Salehzadeh, Christian Cousin, Darrin J. Griffin, and Nader Jalili. 2021. Analyzing human-robot trust in police work using a teleoperated communicative robot. In Proceedings of the 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). IEEE, New York, NY, 919–924.
[14]
De’Aira Bryant, Jason Borenstein, and Ayanna Howard. 2020. Why should we gender? The effect of robot gendering and occupational stereotypes on human trust and perceived competency. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 13–21.
[15]
Kate Conger, Richard Fausset, and Serge F. Kovaleski. 2019. San Francisco Bans Facial Recognition Technology. The New York Times. Retrieved from https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
[16]
Martin Cooney, Masahiro Shiomi, Eduardo Kochenborger Duarte, and Alexey Vinel. 2023. A broad view on robot self-defense: Rapid scoping review and cultural comparison. Robotics 12, 2 (2023), 43.
[17]
Sasha Costanza-Chock. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. The MIT Press.
[18]
Ruth A. David and Paul Nielsen. 2016. Defense Science Board Summer Study on Autonomy. Technical Report. Defense Science Board, Washington, DC.
[19]
Fred D. Davis. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly (1989), 319–340.
[20]
Maartje M.A. de Graaf and Bertram F. Malle. 2018. People’s judgments of human and robot behaviors: A robust set of behaviors and some discrepancies. In Proceedings of the Companion of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 97–98.
[21]
Ewart J. De Visser, Samuel S. Monfort, Ryan McKendrick, Melissa A. B. Smith, Patrick E. McKnight, Frank Krueger, and Raja Parasuraman. 2016. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied 22, 3 (2016), 331.
[22]
Dave DeNatale. 2023. Crocker Park Unveils New Security Guard: A Robot Named SAM. Retrieved October 26, 2023 from https://www.wkyc.com/article/news/local/cuyahoga-county/crocker-park-new-security-guard-robot-sam/95-ecd06a56-ec46-4ce5-b309-87496e162ebd
[23]
A. Walter Dorn. 2016. Smart Peacekeeping: Toward Tech-Enabled UN Operations. International Peace Institute, New York.
[24]
Eduardo Kochenborger Duarte, Masahiro Shiomi, Alexey Vinel, and Martin Cooney. 2022. Robot self-defense: Robots can use force on human attackers to defend victims. In Proceedings of the 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, New York, NY, 1606–1613.
[25]
Sachi Edirisinghe, Satoru Satake, and Takayuki Kanda. 2023. Field trial of a shopworker robot with friendly guidance and appropriate admonishments. ACM Trans. Hum.-Robot Int. 12, 3 (2023), 1–37.
[26]
Sibylle Enz, Martin Diruf, Caroline Spielhagen, Carsten Zoll, and Patricia A. Vargas. 2011. The social role of robots in the future—Explorative measurement of hopes and fears. Int. J. Soc. Robot. 3 (2011), 263–271.
[27]
Connor Esterwood, Kyle Essenmacher, Han Yang, Fanpan Zeng, and Lionel Peter Robert. 2021a. A meta-analysis of human personality and robot acceptance in human-robot interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1–18.
[28]
Connor Esterwood, Kyle Essenmacher, Han Yang, Fanpan Zeng, and Lionel P. Robert. 2022. A personable robot: Meta-analysis of robot personality and human acceptance. IEEE Robot. Autom. Lett. 7, 3 (2022), 6918–6925.
[29]
Connor Esterwood and Lionel P. Robert. 2020. Personality in healthcare human robot interaction (H–HRI): A literature review and brief critique. In Proceedings of the 8th International Conference on Human-Agent Interaction. ACM, New York, NY, 87–95.
[30]
Connor Esterwood, X. Jessie Yang, and Lionel P. Robert. 2021b. Barriers to AV bus acceptance: A national survey and research agenda. Int. J. Hum.-Comput. Int. 37, 15 (2021), 1391–1403.
[31]
Cyrus Farivar. 2021. Security Robots Expand across U.S., with Few Tangible Results. Retrieved September 13, 2022 from https://www.nbcnews.com/business/business-news/security-robots-expand-across-u-s-few-tangible-results-n1272421
[32]
Federal Communications Commission. 2024. FCC Website. Retrieved from https://www.fcc.gov
[33]
Barry Friedman, Farhang Heydari, Max Isaacs, and Katie Kinsey. 2022. Policing police tech: A soft law solution. Berkeley Tech. LJ 37 (2022), 701.
[34]
Darci Gallimore, Joseph B. Lyons, Thy Vo, Sean Mahoney, and Kevin T. Wynne. 2019. Trusting Robocop: Gender-based effects on trust of an autonomous robot. Front. Psychol. 10 (2019), 482.
[35]
Jennifer Goetz, Sara Kiesler, and Aaron Powers. 2003. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication. IEEE, New York, NY, 55–60.
[36]
Hatice Gunes, Frank Broz, Chris S. Crawford, Astrid Rosenthal-von der Pütten, Megan Strait, and Laurel Riek. 2022. Reproducibility in human-robot interaction: Furthering the science of HRI. Curr. Robot. Rep. 3, 4 (2022), 281–292.
[37]
Yoyo Tsung-Yu Hou, EunJeong Cheon, and Malte F. Jung. 2024. Power in human-robot interaction. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 269–282.
[38]
Kristin Houser. 2019. China Deploys Its First Robot Traffic Police. Futurism. Retrieved from https://futurism.com/first-police-robots-traffic-china
[39]
Illinois General Assembly. 2008. Biometric Information Privacy Act.
[40]
Ohad Inbar and Joachim Meyer. 2015. Manners matter: Trust in robotic peacekeepers. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 59, Sage, Los Angeles, CA, 185–189.
[41]
Ohad Inbar and Joachim Meyer. 2019. Politeness counts: Perceptions of peacekeeping robots. IEEE Trans. Hum.-Mach. Syst. 49, 3 (2019), 232–240.
[42]
International Organization for Standardization. 2024. ISO Standards. Geneva, Switzerland. Retrieved from https://www.iso.org/standards.html
[43]
Max Isaacs, Farhang Heydari, and Barry Friedman. 2023. Regulating police robots. In Proceedings of the International Conference on We Robot 2023.
[44]
C. Jayawardena, I. H. Kuo, U. Unger, A. Igic, R. Wong, C. I. Watson, R. Q. Stafford, E. Broadbent, P. Tiwari, J. Warren, et al. 2010. Deployment of a service robot to help older people. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, New York, NY, 5990–5995.
[45]
Elizabeth E. Joh. 2017. Private security robots, artificial intelligence, and deadly force. UC Davis L. Rev. 51 (2017), 569.
[46]
Andrew Kambel. 2022. The Guilt Machine: Behavioral Confirmation in Moral Human-Robot Interactions. Master’s thesis. Utrecht University.
[47]
Hiroko Kanoh. 2017. Immediate response syndrome and acceptance of AI robots—Comparison between Japan and Taiwan. Proc. Comp. Sci. 112 (2017), 2486–2496.
[48]
Nicole D. Karpinsky, Shelby K. Long, and James P. Bliss. 2017. The relationship of the Penny Beliefs Weapons scale to robotic peacekeeper compliance and trust. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 61, Sage, 1580–1584.
[49]
Saad Khan. 2015. Towards Improving Human-Robot Interaction for Social Robots. Doctoral dissertation. University of Central Florida.
[50]
Wonjoon Kim, Nayoung Kim, Joseph B. Lyons, and Chang S. Nam. 2020. Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach. Appl. Ergon. 85 (2020), 103056.
[51]
Jennifer A. Kingson. 2023. Robots Are Your New Office Security Guard. Axios. Retrieved October 26, 2023 from https://www.axios.com/2023/03/03/security-robots-artificial-intelligence
[52]
KNIGHTSCOPE. 2024. Machine-as-a-Service. Retrieved February 6, 2024 from https://www.knightscope.com/who-we-serve
[53]
Hee Rin Lee, JaYoung Sung, Selma Šabanović, and Jeonghye Han. 2012. Cultural design of domestic robots: A study of user expectations in Korea and the United States. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication. IEEE, New York, NY, 803–808.
[54]
Dingjun Li, Pei-Luen Rau, and Ye Li. 2010. A cross-cultural study: Effect of robot appearance and task. Int. J. Soc. Robot. 2, 2 (2010), 175–186.
[55]
Xueni Shirley Li, Sara Kim, Kimmy Wa Chan, and Ann L. McGill. 2023. Detrimental effects of anthropomorphism on the perceived physical safety of artificial agents in dangerous situations. Int. J. Res. Market. 40, 4 (2023), 841–864.
[56]
Ela Liberman-Pincu, Amit David, Vardit Sarne-Fleischmann, Yael Edan, and Tal Oron-Gilad. 2021. Comply with me: Using design manipulations to affect human–robot interaction in a COVID-19 officer robot use case. Multimodal Technol. Interact. 5, 11 (2021), 71.
[57]
Jinchao Lin, April Rose Panganiban, Gerald Matthews, Katey Gibbins, Emily Ankeney, Carlie See, Rachel Bailey, and Michael Long. 2022. Trust in the danger zone: Individual differences in confidence in robot threat assessments. Front. Psychol. (2022), 1426.
[58]
Shelby K. Long, Nicole D. Karpinsky, and James P. Bliss. 2017. Trust of simulated robotic peacekeepers among resident and expatriate Americans. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 61, Sage, 2091–2095.
[59]
Alexander Lopez, Renato Paredes, Diego Quiroz, Gabriele Trovato, and Francisco Cuellar. 2017. Robotman: A security robot for human-robot interaction. In Proceedings of the 18th International Conference on Advanced Robotics (ICAR). IEEE, New York, NY, 7–12.
[60]
Joseph B. Lyons, Sarah A. Jessup, and Thy Q. Vo. 2022. The role of decision authority and stated social intent as predictors of trust in autonomous robots. Top. Cogn. Sci. 16, 3 (Jan. 2022), 430–449.
[61]
Joseph B. Lyons, Chang S. Nam, Sarah A. Jessup, Thy Q. Vo, and Kevin T. Wynne. 2020. The role of individual differences as predictors of trust in autonomous security robots. In Proceedings of the IEEE International Conference on Human-Machine Systems (ICHMS). IEEE, New York, NY, 1–5.
[62]
Joseph B. Lyons, Thy Vo, Kevin T. Wynne, Sean Mahoney, Chang S. Nam, and Darci Gallimore. 2021. Trusting autonomous security robots: The role of reliability and stated social intent. Hum. Factors 63, 4 (2021), 603–618.
[63]
Gabriela Marcu, Iris Lin, Brandon Williams, Lionel P. Robert Jr., and Florian Schaub. 2023. “Would i feel more secure with a robot?”: Understanding perceptions of security robots in public spaces. Proc. ACM Hum.-Comput. Interact. 7, CSCW2 (2023), 1–34.
[64]
Gerald Matthews, Jinchao Lin, April Rose Panganiban, and Michael D. Long. 2019. Individual differences in trust in autonomous robots: Implications for transparency. IEEE Trans. Hum.-Mach. Syst. 50, 3 (2019), 234–244.
[65]
Jeffery C. Mays. 2023. 400-Pound N.Y.P.D. Robot Gets Tryout in Times Square Subway Station. The New York Times. Retrieved October 26, 2023 from https://www.nytimes.com/2023/09/22/nyregion/police-robot-times-square-nyc.html
[66]
M. R. McGuire. 2021. The laughing policebot: Automation and the end of policing. Polic. Soc. 31, 1 (2021), 20–36.
[67]
Kazuki Mizumaru, Satoru Satake, Takayuki Kanda, and Tetsuo Ono. 2019. Stop doing it! Approaching strategy for a robot to admonish pedestrians. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 449–457.
[68]
MordorIntelligence. 2024. Security Robots Market Size & Share Analysis - Growth Trends & Forecasts (2024–2029). Technical Report. Retrieved March 6, 2024 from https://www.mordorintelligence.com/industry-reports/security-robots-market
[69]
Chiara Gallese Nobile, Ildar Begishev, Maksim Zaloilo, Irina Filipova, Anna Zharova, and Elizaveta Gromova. 2023. Regulating smart robots and artificial intelligence in the European Union. J. Digit. Technol. Law 1, 1 (2023), 33–61.
[70]
Scott Ososky. 2013. Influence of Task-Role Mental Models on Human Interpretation of Robot Motion Behavior. Doctoral dissertation. University of Central Florida.
[71]
Mourad Ouzzani, Hossam Hammady, Zbys Fedorowicz, and Ahmed Elmagarmid. 2016. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 5, 1 (2016), 1–10.
[72]
Matthew J. Page, Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, et al. 2021. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 88 (2021), 105906.
[73]
Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle. 2018. What is human-like? Decomposing robots’ human-like appearance using the Anthropomorphic roBOT (ABOT) database. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 105–113.
[74]
Natalia Reich and Friederike Eyssel. 2013. Attitudes towards service robots in domestic environments: The role of personality characteristics, individual interests, and demographic variables. Paladyn J. Behav. Robotics 4, 2 (2013), 123–130.
[75]
Lionel Robert. 2018. Personality in the human robot interaction literature: A review and brief critique. In Proceedings of the 24th Americas Conference on Information Systems. Association for Information Systems, 16–18.
[76]
Lionel P. Robert Jr., Rasha Alahmad, Connor Esterwood, Sangmi Kim, Sangseok You, and Qiaoning Zhang. 2020. A review of personality in human–robot interactions. Found. Trends Inf. Syst. 4, 2 (2020), 107–212.
[77]
Everett M. Rogers, Arvind Singhal, and Margaret M. Quinlan. 2014. Diffusion of innovations. In An Integrated Approach to Communication Theory and Research. Routledge, 432–448.
[78]
Kantwon Rogers, De’Aira Bryant, and Ayanna Howard. 2020. Robot gendering: Influences on trust, occupational competency, and preference of robot over human. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1–7.
[79]
Antonio Roque and Suresh K. Damodaran. 2022. Explainable AI for security of human-interactive robots. Int. J. Hum.-Comput. Int. 38, 18–20 (2022), 1789–1807.
[80]
Antonio Roque, Melvin Lin, and Suresh Damodaran. 2021. Cybersafety analysis of a natural language user interface for a consumer robotic system. In European Symposium on Research in Computer Security. Springer, 107–121.
[81]
Sebastian Schneider, Yuyi Liu, Kanako Tomita, and Takayuki Kanda. 2022. Stop ignoring me! On fighting the trivialization of social robots in public spaces. ACM Trans. Hum.-Robot Int. 11, 2 (2022), 1–23.
[82]
Thomas M. Schnieders, Zhonglun Wang, Richard T. Stone, Gary Backous, and Erik Danford-Klein. 2019. The effect of human-robot interaction on trust, situational awareness, and performance in drone clearing operations. Int. J. Hum. Factors Ergon. 6, 2 (2019), 103–123.
[83]
Massimiliano Scopelliti, Maria Vittoria Giuliani, and Ferdinando Fornara. 2005. Robots in a domestic setting: A psychological approach. Univer. Access Inf. Soc. 4 (2005), 146–155.
[84]
John Short, Ederyn Williams, and Bruce Christie. 1976. The Social Psychology of Telecommunications. Wiley.
[85]
State of California Department of Justice. 2024. California Consumer Privacy Act (CCPA). Retrieved from https://oag.ca.gov/privacy/ccpa
[86]
Richard T. Stone, Thomas M. Schnieders, Kevin A. Push, Stephen Terry, Mary Truong, Inshira Seshie, and Kathryn Socha. 2019. Human-robot interaction with drones and drone swarms in law enforcement clearing operations. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63, Sage, 1213–1217.
[87]
JaYoung Sung, Henrik I. Christensen, and Rebecca E. Grinter. 2009. Sketching the future: Assessing user needs for domestic robots. In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, New York, NY, 153–158.
[88]
Benedict Tay, Younbo Jung, and Taezoon Park. 2014. When stereotypes meet robots: The double-edge sword of robot gender and personality in human–robot interaction. Comp. Hum. Behav. 38 (2014), 75–84.
[89]
Benedict Tiong Chee Tay, Taezoon Park, Younbo Jung, Yeow Kee Tan, and Alvin Hong Yee Wong. 2013. When stereotypes meet robots: The effect of gender stereotypes on people’s acceptance of a security robot. In Proceedings of the International Conference on Engineering Psychology and Cognitive Ergonomics. Springer, 261–270.
[90]
The White House. 2022. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The White House, Washington, DC. Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[91]
Theodoros Theodoridis and Huosheng Hu. 2012. Toward intelligent security robots: A survey. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 42, 6 (2012), 1219–1230.
[92]
Nik Thompson, Tanya McGill, Anna Bunn, and Rukshan Alexander. 2020. Cultural factors and the role of privacy concerns in acceptance of government surveillance. J. Assoc. Inf. Sci. Technol. 71, 9 (2020), 1129–1142.
[93]
Gabriele Trovato, Alexander Lopez, Renato Paredes, and Francisco Cuellar. 2017. Security and guidance: Two roles for a humanoid robot in an interaction experiment. In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 230–235.
[94]
Gabriele Trovato, Alexander Lopez, Renato Paredes, Diego Quiroz, and Francisco Cuellar. 2019. Design and development of a security and guidance robot for employment in a mall. Int. J. Human. Robot. 16, 5 (2019), 1950027.
[95]
European Union. 2016. General Data Protection Regulation (GDPR). Retrieved September 16, 2024 from https://gdpr-info.eu
[96]
United Nations. 1948. Universal Declaration of Human Rights. United Nations, New York, NY. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights
[97]
United Nations. 1966. International Covenant on Civil and Political Rights. Adopted by General Assembly resolution 2200A (XXI) on 16 December 1966. United Nations, New York, NY. Retrieved from https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights
[98]
United States Department of Justice. 1986. Electronic Communications Privacy Act of 1986 (ECPA). U.S. Department of Justice, Washington, DC. Retrieved from https://bja.ojp.gov/program/it/privacy-civil-liberties/authorities/statutes/1285
[99]
U.S. Department of Justice, Civil Rights Division. 2024. Americans with Disabilities Act Title III Regulations. U.S. Department of Justice, Civil Rights Division, Washington, DC. Retrieved from https://www.ada.gov/law-and-regs/regulations/title-iii-regulations/
[100]
U.S. Department of Labor. 2024. Occupational Safety and Health Administration website. Retrieved from https://www.osha.gov
[101]
Viswanath Venkatesh, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly 27, 3 (2003), 425–478.
[102]
Mila Violet. 2024. Athena: India’s 1st AI-Powered Autonomous Security Robot. Medium.
[103]
Yan Wang. 2014. Gendering Human-Robot Interaction: Exploring How a Person’s Gender Impacts Attitudes toward and Interaction with Robots. Master’s thesis. University of Manitoba.
[104]
Kyle Wiggers. 2017. Meet the 400-pound Robots that Will Soon Patrol Parking Lots, Offices, and Malls. DigitalTrends. (Updated Nov. 20, 2017). Retrieved from https://www.digitaltrends.com/cool-tech/knightscope-robots-interview/
[105]
Tom Williams and Kerstin Sophie Haring. 2023. No justice, no robots: From the dispositions of policing to an abolitionist robotics. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. ACM, New York, NY, 566–575.
[106]
Rosemarie E. Yagoda and Douglas J. Gillan. 2012. You want me to trust a ROBOT? The development of a human–robot interaction trust scale. Int. J. Soc. Robot. 4, 3 (2012), 235–248.
[107]
Xin Ye, Wonse Jo, Arsha Ali, Samia Cornelius Bhatti, Connor Esterwood, Hana Andargie Kassie, and Lionel Peter Robert. 2024. Autonomy Acceptance Model (AAM): The role of autonomy and risk in security robot acceptance. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 840–849.
[108]
Xin Ye and Lionel P. Robert. 2023. Human security robot interaction and anthropomorphism: An examination of Pepper, RAMSEE, and Knightscope robots. In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, New York, NY, 982–987.
[109]
Sangseok You and Lionel Robert. 2018. Teaming up with robots: An IMOI (inputs-mediators-outputs-inputs) framework of human-robot teamwork. Int. J. Robotic Eng. 2, 3 (2018).
[110]
Hsiu-Ping Yueh and Weijane Lin. 2013. The interaction between human and the home service robot on a daily life cycle. In Proceedings of the Cross-Cultural Design. Cultural Differences in Everyday Life: 5th International Conference (CCD ’13). Springer, New York, NY, 175–181.
[111]
Hsiu-Ping Yueh and Weijane Lin. 2016. Services, appearances and psychological factors in intelligent home service robots. In Proceedings of the International Conference on Cross-Cultural Design. Springer, New York, NY, 608–615.
[112]
Qiaoning Zhang, X. Jessie Yang, and Lionel P. Robert Jr. 2022. Individual differences and expectations of automated vehicles. Int. J. Hum.-Comput. Int. 38, 9 (2022), 825–836.

Published In

ACM Transactions on Human-Robot Interaction, Volume 14, Issue 2 (June 2025), 312 pages
EISSN: 2573-9522
DOI: 10.1145/3703049
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 24 December 2024
Online AM: 17 October 2024
Accepted: 10 October 2024
Revised: 21 September 2024
Received: 31 January 2023
Published in THRI Volume 14, Issue 2

Author Tags

  1. Robot
  2. Security
  3. Meta-Analysis/Literature Survey
  4. User Experience Design
