Open Access article, published by De Gruyter, August 27, 2021, under the CC BY 4.0 license

Evaluating the use of human aware navigation in industrial robot arms

  • Matthew Story, Cyril Jaksic, Sarah R. Fletcher, Philip Webb, Gilbert Tang and Jonathan Carberry

Abstract

Although the principles followed by modern standards for interaction between humans and robots echo the First Law of Robotics popularized in science fiction in the 1960s, the current standards regulating that interaction emphasize the importance of physical safety while remaining less developed in another key dimension: psychological safety. As sales of industrial robots have increased over recent years, so has the frequency of human–robot interaction (HRI). The present article examines the current safety guidelines for HRI in an industrial setting and assesses their suitability. It then presents a means of improving the current standards by drawing on lessons learned from studies into human aware navigation (HAN), which has seen increasing use in mobile robotics. The article highlights limitations in current research, where the relationships established in mobile robotics have not been carried over to industrial robot arms. To understand this, it is necessary to focus less on how a robot arm avoids humans and more on how humans react when a robot shares the same space. Currently, the safety guidelines lag behind the technological advances; however, with further studies aimed at understanding HRI and applying that understanding to newly developed path finding and obstacle avoidance methods, science fiction can become science fact.

1 Introduction

“A robot may not injure a human being, or, through inaction, allow a human being to come to harm”: the First Law of Robotics put forward by Asimov [1] is a guiding principle within human–robot interaction (HRI). Harm has primarily been interpreted as physical harm, despite Asimov not limiting the definition in this way within his short stories. For instance, in pp. 101–122, a robot gives people false information to prevent them from feeling upset, so as not to break the first law. Furthermore, the short stories serve more as a warning about the pitfalls of setting such laws without flexibility for the inherent unpredictability of human behavior. Current technology does not allow a robot to read someone’s thoughts, but the technology and research are available to understand how a robot’s actions may affect a person’s psychological well-being (discussed in further detail in Section 3). This is becoming more relevant not only as the number of robots within the industrial environment increases but also as the number of collaborative robots increases [2,3,4]. Initially designed as tools for completing highly repetitive tasks [5], robots have seen their range of applications expand significantly, from providing meals, laundry, and basic patient care in hospitals [6] to acting as tour guides in museums [7,8]. High speed, high levels of repeatability, and continuous operation are all advantages that robots have over human workers. This is offset by the difficulty robots have in adapting to dynamic changes in the working environment, which human workers can generally take in their stride. In ref. [9], it was highlighted that there are two elements of safety to be considered during HRI: physical safety (the action does not result in an injury for the human) and psychological safety (the action does not result in fear or surprise for the human). Research into navigation while maintaining psychological safety has been more limited for industrial robot arms (stationary, typically with high payload capabilities) than for mobile robots (robots with no fixed base, capable of moving through the environment). Although mobile robots have been deployed for use in HRI since the late 1990s [10] and the psychological effects of robotic behavior were becoming prominent in the early 2000s [11], it was not until the mid-2000s that the two were combined to design a path planner incorporating them both. A Human Aware path planner is one that incorporates a person’s psychology into the path finding calculations [12]. Therefore, to assess the current state of Human Aware Navigation (HAN) in industrial robot arms, the following questions will be addressed:

  • How can the current HRI safety guidelines be optimized for maintaining the physical and psychological safety of the operator?

  • How can the approaches for HAN in mobile robotics be applied to an industrial robot arm?

To achieve this, first, a review of the current guidance for physical safety in industry for robot arms was conducted, with the aim of highlighting the lack of psychological safety considerations. This was followed by a systematic literature review of HAN, for which papers were searched on Google Scholar under the search terms “Human Aware Navigation” and “Human–Robot Interaction,” with the aim of providing answers to the questions posed earlier. The review included over 50 papers published between 1998 and 2020, with papers only included if they involved HRI and measured a psychological variable as a result of the HRI.

2 Safety with barriers – the current state of safety in industrial robot arms

With an increasing prevalence of HRI in an industrial environment, more attention has been paid recently to dynamic obstacle avoidance in static-base robot arms. This research differs from mobile robot navigation in that the dynamic obstacle is generally assumed to be a person, and therefore, the path planning and obstacle avoidance are designed with physical safety as the main priority and take a more conservative approach [13]. The traditional method was the use of physical barriers to completely enclose the robot when in operation. However, this is beginning to change, as shown by the guidance set in ISO 10218-2:2011 and ISO/TS 15066:2016 [14,15]. ISO 10218-2:2011 is the second part of the standards encompassed by ISO 10218, where the first part covers the “design and application of the particular robot integration” and the second part “provides guidelines for the safeguarding of personnel during integration, installation, functional testing, programming, operation, maintenance and repair.” Although the scope of ISO 10218 is industrial robots, ISO/TS 15066:2016 focuses on collaborative robotics by providing “guidance for collaborative robot operation where a robot system and people share the same workspace.” Following the guidance set in ISO/TS 15066:2016, when a human enters the workspace of the robot arm, one of the three following measures must take place for safe collaborative operation:

  • Safety-rated monitored stop, which involves the robot ceasing motion before the operator enters a preset collaborative workspace. If the robot is in motion and within the workspace when the operator enters, then the robot ceases motion, only continuing again once the operator has left the workspace.

  • Speed and separation monitoring, which involves the robot maintaining a protective separation distance, which can be reduced with reduced robot speed and/or by the robot executing a different path. If the distance is reduced to below a set value, then the robot completes a safety-rated monitored stop.

  • Power and force limiting, which involves reducing the level of impact should physical contact between the robot and the operator occur. The reduced impact can be achieved by increasing the contact surface area, using mechanisms and/or materials that absorb the energy, extending the energy transfer time, and limiting moving masses.

These can be categorized into two distinct strategies: precollision and postcollision. A precollision strategy aims to prevent a collision from happening (safety-rated monitored stop and speed and separation monitoring), while a postcollision strategy aims to minimize the potential damage when a collision occurs (power and force limiting). Although not a collision avoidance/mitigation strategy, ISO/TS 15066:2016 also mentions hand guiding within the HRI guidelines, which involves the robot performing a safety-rated monitored stop followed by the operator maneuvering the end effector.
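To make the precollision strategies more concrete, the following minimal sketch shows how a speed and separation monitoring loop might be structured in software. The simplified protective-distance formula, the numeric values, and the linear speed scaling are illustrative assumptions for demonstration only; they are not the normative terms or values defined in ISO/TS 15066:2016.

```python
# Minimal sketch of a speed and separation monitoring loop (illustrative only).
# The simplified protective-distance formula and all numeric values are
# assumptions for demonstration, not the normative ISO/TS 15066:2016 terms.

from dataclasses import dataclass

@dataclass
class SeparationConfig:
    human_speed: float = 1.6      # assumed worst-case operator speed (m/s)
    reaction_time: float = 0.1    # sensor + controller latency (s)
    stopping_time: float = 0.3    # time for the robot to come to rest (s)
    robot_stop_dist: float = 0.2  # distance travelled while stopping (m)
    intrusion_margin: float = 0.1 # sensing uncertainty margin (m)

def protective_distance(cfg: SeparationConfig) -> float:
    """Simplified protective separation distance (hypothetical form)."""
    human_term = cfg.human_speed * (cfg.reaction_time + cfg.stopping_time)
    return human_term + cfg.robot_stop_dist + cfg.intrusion_margin

def command_speed(measured_distance: float, nominal_speed: float,
                  cfg: SeparationConfig) -> float:
    """Scale the robot speed down as the operator approaches; stop below the limit."""
    s_p = protective_distance(cfg)
    if measured_distance <= s_p:
        return 0.0                      # safety-rated monitored stop
    slow_zone = 2.0 * s_p               # begin slowing within twice the limit
    if measured_distance < slow_zone:
        scale = (measured_distance - s_p) / (slow_zone - s_p)
        return nominal_speed * scale    # linear reduction toward the stop
    return nominal_speed

if __name__ == "__main__":
    cfg = SeparationConfig()
    for d in (2.0, 1.2, 0.8, 0.5):
        print(f"distance {d:.1f} m -> commanded speed {command_speed(d, 0.25, cfg):.3f} m/s")
```

In such a scheme, the monitored stop and the speed reduction are two outputs of the same distance check, which is why the two precollision measures are often implemented together.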

The precollision strategies operate as guidelines for path finding and obstacle avoidance algorithms, although these algorithms have been an area of research since well before the introduction of the guidelines. Algorithms for navigating the world have been studied and developed for over 60 years, with the A* [16,17] and artificial potential field (APF) [18] algorithms coming to prominence in robotics and HAN (see Section 3). An A* algorithm generates a cost of traveling to a point based primarily on distance (although other variables, such as obstacles, can be added depending on the use case), while an APF algorithm generates repulsive and attractive fields around obstacles and goals, respectively. By reducing the calculation cost, APF algorithms are able to operate more efficiently in a 3D space and provide real-time obstacle avoidance [19,20,21]. Although the original APF algorithm suffered from local minima and goal-not-reachable-with-obstacle-nearby issues, iterations have shown that these issues can be overcome without completely rewriting the algorithm [22,23,24,25,26]. Both of these algorithms benefit from a relatively low complexity, allowing further variables to be added to the cost function or repulsive fields.
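As an illustration of the APF idea described above, the following sketch computes one gradient step on a 2D field with an attractive component toward the goal and repulsive components around obstacles. The gains, influence radius, and step size are illustrative assumptions rather than values from any cited implementation.

```python
# Minimal 2D artificial potential field sketch: an attractive field toward the
# goal and repulsive fields around obstacles. Gains and the influence radius
# are illustrative assumptions, not values from any cited implementation.

import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
             influence=1.0, step=0.05):
    """Return the next position after one gradient-descent step on the field."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                 # attractive: pulls toward goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                 # repulsive only inside radius
            force += k_rep * (1.0/d - 1.0/influence) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return pos if norm < 1e-9 else pos + step * force / norm

if __name__ == "__main__":
    pos, goal = np.array([0.0, 0.0]), np.array([2.0, 2.0])
    obstacles = [np.array([1.0, 1.1])]
    for _ in range(200):
        pos = apf_step(pos, goal, obstacles)
    print("final position:", np.round(pos, 2))
```

In principle, the same structure can be evaluated at points sampled along a robot arm's links, which is one reason APF-style methods are attractive for real-time obstacle avoidance.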

The algorithm, however, is only one element required for an effective precollision strategy. For a robot to avoid an obstacle, it must also have a means to detect the obstacle. While an algorithm may be efficient with a high avoidance success rate in simulation, if the detection system is not adequate for the task, then the avoidance success rate will decrease [20]. Detection systems have seen significant technological advances, such as the development of increasingly complex on-board systems, especially machine vision systems. Early robotic systems relied on laser distance scanners to interpret the world around them, and many are still in use, whereas more modern systems can use depth and color camera systems capable of relaying a significantly greater amount of information [27,28]. A laser scanner can provide a highly reactive and detailed measure of the distance of an object from the robot, but it cannot be used to interpret human features, gestures, or emotions in the way an RGB-D camera can. Furthermore, the increase in processing power allows for detailed analysis of what a vision system is receiving. Machine learning has enabled sophisticated algorithms to detect, track, and determine the pose a person is taking in real time. Early systems were hampered by the need for constant calibration, a static environment, and markers for the person to be detected and tracked [29]; they were not only unreliable but also costly. More recent devices, such as Microsoft’s Kinect, have made real-time, full-body tracking more feasible and have received attention for use in human–robot collaboration (HRC) [30,31,32,33].
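As a simple illustration of how skeleton-tracking output might feed a precollision strategy, the sketch below computes the minimum distance between tracked human joints and sampled points on the robot. The function get_tracked_joints() is a hypothetical stand-in for whichever RGB-D camera SDK is actually used, and all coordinates are made-up placeholder values.

```python
# Sketch of using skeleton-tracking output for separation monitoring. The
# joint coordinates would come from an RGB-D tracker (e.g. a Kinect-class
# sensor); here they are hard-coded placeholders, and get_tracked_joints() is
# a hypothetical stand-in for whichever SDK is actually used.

import numpy as np

def get_tracked_joints() -> np.ndarray:
    """Placeholder for a camera SDK call returning joint positions (metres)."""
    return np.array([[1.20, 0.10, 0.90],   # head
                     [1.15, 0.05, 0.60],   # torso
                     [0.95, 0.30, 0.70]])  # right hand

def min_human_robot_distance(joints: np.ndarray,
                             robot_points: np.ndarray) -> float:
    """Smallest Euclidean distance between any tracked joint and robot point."""
    diffs = joints[:, None, :] - robot_points[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

if __name__ == "__main__":
    # A few sampled points along the robot arm's links (illustrative).
    robot_points = np.array([[0.0, 0.0, 0.4],
                             [0.3, 0.1, 0.6],
                             [0.6, 0.2, 0.7]])
    d = min_human_robot_distance(get_tracked_joints(), robot_points)
    print(f"minimum separation: {d:.2f} m")
```

The value returned by such a check is what a speed and separation monitoring loop would compare against its protective distance.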

Although the precollision strategies significantly reduce physical harm, they do not fully mitigate the likelihood of an injury [34]. A person can still collide with inanimate objects, which means that the robot stopping when a person enters the workspace is not a guarantee of physical safety, and even with the robot’s speed and force reduced, the person can still come to harm as a result of their own speed and force. As the maximum allowed speed for a robot during HRC is 250 mm/s, well below the speed a person can achieve, the robot’s ability to avoid a collision can be significantly influenced by the actions of the person it is avoiding. Furthermore, a robot that would either collide with the person or cease working when a person enters the workspace is not ideal for HRC.

The strategies also do not consider how changes in the robot’s proximity or speed may affect the person’s psychological well-being. This is despite the growing research that shows there is a link between them, predominantly in HRI and social robotics (which is discussed in further detail in Section 3). By taking the factors mentioned earlier into consideration, one can argue that the current guidance in ISO/TS 15066:2016 can be improved to maintain the physical and psychological safety of the operator. HAN [12], a field of mobile robotics, does take both physical and mental safety into consideration. The lessons learned from studies into this recent field may provide a means to inform and improve the current guidelines.

3 Human aware navigation

To operate and be accepted in the same environment as people, a robot must not only be able to avoid collisions with them but also recognize and act in accordance with human social behavior [35]. Path finding and avoidance algorithms that take this factor into account are now finding increased relevance in robotics [36]. Alami et al. [37] argued that for navigation to be considered human aware, the robot should be able to convey its current state, current goal, and imminent move in an understandable manner. Since this definition in 2000, the criteria for HAN have become more sophisticated as the understanding of the relationships between a robot’s attributes and the person’s psychological well-being has improved. This improved understanding is a result of advancing technology, which has increased not only the types of interactions people can have with robots but also the means of assessing a person’s reaction during the interaction.

Although a robot may not have anthropomorphic features, concepts such as personality and intent will still be applied by people onto the robot [38,39,40]. This understanding has led many of the social psychological concepts from human–human interaction to form the basis for the psychological concepts in HRI; however, the direct nature of this application has been the source of debate [41]. Some of the social psychological concepts, such as trust and workload, have previously been researched in workplaces with increasing automation [42,43], which can provide a foundation and a point of comparison. In this sense, trust is determined by the ability of the machine to complete the task without harming those around it, which has become a key area of research with the development of self-driving vehicles [44]. Just as the passenger must be able to trust the vehicle to take them to their destination without incident, so must the operator be able to trust the robot they are collaborating with to complete the job without incident. As the operator gains more trust in the robot with the task, the efficiency of the HRC increases up to a point [45,46,47]. Therefore, for HRC to become accepted by workers, a deeper understanding of how the robot can affect trust needs to be developed. A potential solution is to bring operators into the early design stages, with an extended study to determine how such factors scale over time. Robots may perform differently in experimental environments than in the industrial environment; involving operators early means that expectations will be set more accurately, and workers will be more familiar with the robot’s capabilities and their own role within the task. Furthermore, it sets more realistic expectations of what the robot can do, reducing the potential dissatisfaction when the robot is not as adaptable as initially perceived [48]. Takayama and Pantofaru [49] showed that participants would allow a robot to approach closer during initial interactions when they had at least a year of experience with robots. However, as the person gains more experience with a particular robot, acceptable proximity becomes less dependent on general experience and more dependent on the particular robot being used as the task is completed [50]. Experience can also play a key role in the efficiency of the interaction; therefore, it is key to introduce the human collaborators at the earliest possible stage. Although the human always having priority is generally considered an important aspect of social path planning [51], people unfamiliar with a robot and its capabilities will usually opt to give way [52]. Therefore, if operators gain experience early in the development stage of how the robot will react to their own actions, it may alleviate this confusion. Furthermore, there is a lack of research into the long-term effects of HRI on psychological factors and how they change over time.

Research into human factors in human–robot collaboration is still a relatively new field, but current studies are establishing relationships between robot attributes and a person’s psychological attributes. These relationships are discussed in the following two sections. The first focuses on mobile robotics, which has received more attention regarding the psychological impacts of HRI due to the roles these robots are envisioned to have, for example, in social robotics. The subsequent section then focuses on robot arms, which have received increased attention of late as advances in technology have allowed physical barriers to be removed.

3.1 Mobile robots

Even though mobile robots have been deployed for use in HRI since the late 1990s [10], and the psychological effects of robotic behavior were becoming prominent in the early 2000s [11], it was not until the mid-2000s that these were combined to design a path planner that would incorporate them both into what was coined as HAN. To achieve HAN, Sisbot et al. [12,53,54] set the following criteria:

  • The motion must not result in physical harm to the person.

  • The motion must be able to complete the task reliably and sufficiently.

  • The motion considers the preferences and requirements of the person.

While both the first and second criteria are achievable with traditional path finding methods, the last criterion requires a more thorough understanding of how a robot’s motion can affect the person. Mateus et al. [55] further defined the last criterion by stating the goals needed to achieve it: comfort, respect for social rules, and naturalness. One of the attributes of a robot’s motion that has been considered is its proximity. Human–human interaction already has a well-established model for socially acceptable proximity, which can serve as a template for HRI. As described by Hall [56], proxemics provides the fundamental outline for socially acceptable distances between people, which can be utilized as a reference for socially acceptable distances for robots. These socially acceptable distances can be used to designate comfort zones: the closer the zone, the higher the level of stress the person would experience should a stranger enter it. However, there is no consensus on the most accurate social spacing model. The most prominent models for social spacing around an individual are shown in Figure 1. Each of these shapes can be applied depending on the context of the interaction, but they all suggest that social sensitivity decreases with increasing distance from the person. The closest distance a person will allow a robot to approach is highly subjective and has been shown to be linked to personality traits [57], where the robot is “looking” [58], the size of the robot [11,59], and whether the person or the robot is approaching [60].

Figure 1: The model displays four social spacing shapes around a person, which dictate the different comfort zones: (a) concentric circles, (b) egg shaped, (c) elliptical, and (d) elliptical, skewed on the dominant side. Adapted from ref. [61].
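To illustrate how a proxemics model such as the concentric zones in Figure 1a can be turned into something a planner can use, the sketch below classifies a robot-to-person distance into a comfort zone and converts it into a discomfort cost. The zone radii are the approximate values commonly attributed to Hall's proxemics, and the cost function and its parameters are illustrative assumptions rather than values taken from any of the cited planners.

```python
# Sketch of a concentric proxemics model (Figure 1a) used as a navigation cost.
# The zone boundaries are the approximate values commonly attributed to Hall's
# proxemics and would need tuning for a specific robot and context.

from math import exp

INTIMATE, PERSONAL, SOCIAL = 0.45, 1.2, 3.6   # zone radii in metres (approximate)

def proxemic_zone(distance: float) -> str:
    """Classify a robot-to-person distance into a proxemic zone."""
    if distance < INTIMATE:
        return "intimate"
    if distance < PERSONAL:
        return "personal"
    if distance < SOCIAL:
        return "social"
    return "public"

def proxemic_cost(distance: float, peak: float = 10.0, decay: float = 2.0) -> float:
    """Exponentially decaying discomfort cost added to a planner's cost map."""
    return peak * exp(-decay * distance)

if __name__ == "__main__":
    for d in (0.3, 0.8, 2.0, 5.0):
        print(f"{d:.1f} m -> {proxemic_zone(d):8s}  cost={proxemic_cost(d):.2f}")
```

A planner would add such a cost to its ordinary path cost, so that routes skirting a person's personal zone become more expensive than slightly longer routes that keep a social distance.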

Pandey and Alami [62,63] used such a model to develop an algorithm, leading the robot to avoid an elliptical space (compare with Figure 1) deemed too close for comfort, with socially acceptable zones based on the person’s field of view. The robot’s path would then be planned using a combination of an A* algorithm and Voronoi diagrams. When compared to a static obstacle avoidance algorithm that does not apply social distances, the robot was considered to have performed in a less uncomfortable manner. Sisbot et al. [12,53] developed a multilevel motion planner that generates a cost grid around a detected person in the environment. The associated costs of the grid are determined first by the physical distance to the person and then by the person’s perceived vision. The more effort required to see the robot on its path, the higher the cost, with the highest costs being behind the person or behind an obstacle. The robot therefore plans to be as visible as possible for as long as possible and only enters the social proxemic zone when necessary. The use of a cost grid allows an existing algorithm (in this case, A*) to be iterated on for navigating in a socially acceptable manner. Sun et al. [64] also identified the sudden appearance of a robot around a corner as socially unacceptable, generating a higher cost around corners and blind spots. Vega-Magro et al. [65] took an applied approach by generating cost maps around items in the environment based on the way a person would use the item, e.g., a trapezoidal area in front of a TV. When evaluated, either in simulation or in real life, all of these algorithms were capable of maintaining the set socially acceptable distances, even with multiple people included in the calculations.
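A simplified reading of the cost-grid idea attributed to Sisbot et al. [12,53] is sketched below: cells near the person carry a high proximity cost, and cells that require more effort for the person to see (those behind them) carry an additional visibility cost. The Gaussian form, weights, and decay constants are illustrative assumptions; the resulting grid could then be supplied to A* or another graph search as additional traversal cost.

```python
# Simplified sketch of a human-centred cost grid in the spirit of Sisbot et al.
# [12,53]: cells near the person are expensive, and cells behind the person
# (outside their field of view) are penalized further. The Gaussian widths and
# weights are illustrative assumptions.

import numpy as np

def human_aware_cost_grid(shape, cell_size, person_xy, person_heading,
                          w_dist=8.0, sigma=1.0, w_vis=4.0):
    """Return a 2D cost grid combining proximity and visibility penalties."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    px = xs * cell_size - person_xy[0]
    py = ys * cell_size - person_xy[1]
    dist = np.hypot(px, py)

    # Proximity cost: Gaussian bump centred on the person.
    cost = w_dist * np.exp(-(dist ** 2) / (2 * sigma ** 2))

    # Visibility cost: angle between the person's gaze and the cell direction.
    angle = np.abs(np.arctan2(py, px) - person_heading)
    angle = np.minimum(angle, 2 * np.pi - angle)          # wrap to [0, pi]
    cost += w_vis * (angle / np.pi) * np.exp(-dist / 3.0) # worse behind the person

    return cost

if __name__ == "__main__":
    grid = human_aware_cost_grid((40, 40), 0.25, person_xy=(5.0, 5.0),
                                 person_heading=0.0)
    # The grid can then be handed to A* (or any graph search) as extra edge cost.
    print("max cost:", round(float(grid.max()), 2))
```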

Ferrer et al. [66,67] also applied a proxemics-based model when developing their mobile robots, Tibi and Dabo, which reduced the social work caused to a person as a result of the robot navigating a crowd. The model used was based on the social force model (SFM) developed by Helbing and Molnár [68], which was designed as a means of describing the self-organization of pedestrians in a crowd. Similar to APFs, the SFM generates repulsive forces around obstacles (in this case, other pedestrians) and attractive forces toward the goal. Ferrer and Sanfeliu [69] iterated on their design further by adding the capability of predicting the person’s reaction to the robot’s possible actions and taking the course with the lowest social work impact, again reducing the stress further. Shiomi et al. [70] expanded on this model to develop a socially acceptable, human-like collision avoidance system for a robot moving among pedestrians. According to ref. [69], the model was first calibrated by the robot moving toward the person without collision avoidance to determine the socially acceptable distance, instead of using proxemics. The system also operated on a collaborative avoidance basis, where both parties move to avoid the collision, as is the case in most human–human collision avoidance situations [52]. Surveys completed by participants reported that the robot with the updated model was perceived as safer, and the results showed that the avoidance system also performed objectively more safely. Although the SFM has been shown to perform well outdoors, it performs less well indoors, where the repulsive vectors can result in the robot taking unnecessary detours to avoid collisions [71].
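The sketch below illustrates the structure of a social force model step in the spirit of Helbing and Molnár [68]: a driving force that relaxes the agent toward its desired velocity plus exponentially decaying repulsion from nearby pedestrians. The parameter values (relaxation time, repulsion strength and range, agent radius) are illustrative assumptions, not calibrated values from the cited studies.

```python
# Minimal sketch of a social force model step in the style of Helbing and
# Molnár [68]: a driving force toward the goal plus exponential repulsion from
# nearby pedestrians. Parameter values (tau, A, B, radii) are illustrative.

import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.0, tau=0.5, A=2.0, B=0.3, radius=0.4):
    """Advance one agent by one time step under simplified social forces."""
    pos, vel, goal = map(lambda a: np.asarray(a, float), (pos, vel, goal))
    to_goal = goal - pos
    e = to_goal / (np.linalg.norm(to_goal) + 1e-9)

    # Driving force: relax toward the desired velocity.
    force = (desired_speed * e - vel) / tau

    # Repulsive forces from other agents, decaying exponentially with the gap.
    for other in others:
        diff = pos - np.asarray(other, float)
        d = np.linalg.norm(diff) + 1e-9
        force += A * np.exp((2 * radius - d) / B) * (diff / d)

    vel = vel + force * dt
    return pos + vel * dt, vel

if __name__ == "__main__":
    pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
    goal, others = np.array([5.0, 0.0]), [np.array([2.5, 0.2])]
    for _ in range(200):
        pos, vel = social_force_step(pos, vel, goal, others)
        if np.linalg.norm(goal - pos) < 0.1:
            break
    print("final position:", np.round(pos, 2))
```

The exponential repulsion term is what produces the smooth, pedestrian-like detours around people; it is also the term that can cause the unnecessary indoor detours noted above when corridors are narrow.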

The aforementioned algorithms reveal a key limitation in the analysis of HAN: the metrics used for evaluation. Despite the algorithms being designed to improve the psychological well-being of the person, the psychological factors themselves were not assessed. Instead, the evaluations rely on physical distance data and on socially acceptable distances interpreted from proxemics. Robust studies of the algorithms with multiple participants, in which factors such as comfort, workload, and trust are measured, may prove beneficial in further understanding the variables to be added to existing algorithms to make the path planning “Human Aware.” While these algorithms show the promise of adapting existing algorithms to incorporate psychological factors, they only consider proximity. This is significant, as studies have previously highlighted mobile robot attributes other than proximity that contribute to a person’s psychological well-being. Predictable movement is movement that matches what the person expects [72]. Motion that is more predictable than human motion has been shown to be preferred when collaborating with a robot [73]. Another attribute that has been shown to influence a person’s comfort is the speed of the robot. Butler and Agah [11] used a Nomadic Scout II at varying speeds when approaching a person, after which the person was asked to complete a 5-point Likert scale survey ranging from Very Uncomfortable to Very Comfortable. The speeds that scored higher on this scale were between 0.25 and 0.4 m/s, but a significant change toward uncomfortable was not reported until 1.0 m/s, while a decrease in comfort and an increase in frustration were suggested to be possible at speeds below 0.25 m/s. Sardar et al. [74] used multiple scales (Negative Attitude Towards Robots, Source Credibility, Perceived Human-Likeness, and Interpersonal Attraction Scale) as well as physiological measures to assess participants’ compensatory behaviors when a robot approached them at two different speed settings. They found that at the higher speed, participants reacted with more “pleasant” facial expressions and perceived the robot as more trustworthy (which may be attributed to the greater noise generated by the robot at higher speeds, leading to increased awareness of the robot’s location). Despite the potential for relationships between a robot’s speed and a person’s psychological well-being to exist, the number of studies into this is quite limited. This may be because speed is limited in the perceived roles for the robot during HRI, whether by the environment or the nature of the task. Furthermore, the studies have little cohesion, as the metrics used are not consistent and tend to be tailor made for each experiment rather than following a universal method of measurement. The lack of consensus on psychological concepts is challenging to overcome due to their inherently subjective nature and has been problematic for workload for nearly 40 years [42].

From this review of HAN in mobile robotics, the path planner should utilize a model based on proxemic spacing. However, proxemics should not be considered the only attribute that makes a path planner “Human Aware.” A model incorporating an understanding of a person’s available field of view has also been shown to help improve the interaction [12]. The robot’s speed and predictability when interacting with a person are also important factors that require further study [11,73]. It is of note, however, that the mobile robots used in the aforementioned speed and proximity studies are smaller than or equal to the height of an average person. This will have to be taken into consideration when assessing the application of HAN to industrial robot arms, as they are often larger with a higher payload capacity. As some of the studies have shown there to be a relationship between robot size and acceptable proximity, it is essential to understand the effects of proximity and speed for robots larger than a person. There is also a persistent lack of consensus in psychological analysis tools within robotics that should be addressed. Many of the studies within this review use different methods for assessing a participant’s “comfort,” without a formal or agreed upon definition. This leads to difficulty in comparing studies, as the metric is consistently vague. Therefore, a formal definition of psychological concepts such as comfort, together with a standardized scale for assessment, would be of importance to this field.

Considering these studies into mobile robotics, the following criteria can be considered key for HAN:

  • The robot must avoid collision with persons and obstacles during navigation.

  • The movements of the robot must be predictable and smooth.

  • The path planner should be informed by the psychological needs of the people it is intended to interact with.

Section 3.2 reviews the few studies that have included psychological safety factors when designing a path finding and obstacle avoidance algorithm on robot arms.

3.2 Robot arms

In contrast to the maintenance of physical safety, HAN is less developed in robot arms than in mobile robotics. This could be because robot arms are more commonly applied in industry, where they are often separated by physical barriers. Such a setting greatly limits the opportunity for HRI and, as a result, reduces the relevance of mental safety considerations. Once the barriers are removed, however, robot arms should meet the same psychological safety measures as mobile robots. Because the removal of barriers and the introduction of collaborative robots are becoming more prominent, mental safety with robot arms is more relevant than ever.

Although robot arms and mobile robots share many qualities, there are some significant differences. One is the extra dimension of available movement and the added degrees of freedom in robot arms. This inherently makes robot arms more complex, making their planned movements harder to read [75]. Due to the different applications and motions a robot arm tends to be associated with, the addition of a “face” or expressive character is not widely implemented, the main exception being the Baxter robots by ReThink Robotics. This presents one of the challenges for a robot arm in HAN: clear legibility and predictability of movement. One of the main challenges for legibility is that different viewpoints and different robots will give varying degrees of legibility [76]. Dragan et al. [77] set up an experiment that assessed the objective time to complete a task and the subjective perceptions of the participants during HRC while the robot operated in three different movement modes: functional, legible, and predictable. The person and the robot worked together to make tea, with the type of tea being inferred from which color cup the robot was seen to be reaching for. The objective results showed that participants reacted significantly faster with predictable motion than with functional motion, with a further 33% reduction in reaction time with legible motion. In turn, this reduced the time taken to complete the task. The objective data also concur with the subjective perceptions, where trust, fluency, safety, perceived closeness, robot contribution, predictability, legibility, and capability were rated significantly higher for legible and predictable motion than for functional motion. These findings highlight that by considering the perceptions of the person, the task in HRC will not only be completed faster but will also lead to improved job satisfaction and acceptance.
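A loose sketch of the cost-based goal inference that underlies legibility measures such as that of ref. [77] is given below: an observer judges which goal a partial trajectory is heading for by comparing the cost of the path so far, plus the remaining cost to each goal, against the direct cost to that goal, with straight-line distance standing in here for the true trajectory cost. The scaling factor and example coordinates are illustrative assumptions.

```python
# Loose sketch of cost-based goal inference used to score motion legibility
# (in the spirit of ref. [77]): an observer infers which goal a partial
# trajectory is heading for, with straight-line distance standing in for the
# true trajectory cost. All values are illustrative.

import numpy as np

def goal_probabilities(start, current, goals, beta=3.0):
    """P(goal | trajectory so far) from exponentiated cost differences."""
    start, current = np.asarray(start, float), np.asarray(current, float)
    scores = []
    for g in goals:
        g = np.asarray(g, float)
        cost_via_current = np.linalg.norm(current - start) + np.linalg.norm(g - current)
        cost_direct = np.linalg.norm(g - start)
        # Trajectories that detour relative to goal g make g less likely.
        scores.append(np.exp(-beta * (cost_via_current - cost_direct)))
    scores = np.array(scores)
    return scores / scores.sum()

if __name__ == "__main__":
    start = [0.0, 0.0]
    goals = [[1.0, 1.0], [1.0, -1.0]]            # e.g. two cups to reach for
    # A waypoint that leans toward the intended cup early is more "legible".
    for waypoint in ([0.3, 0.0], [0.3, 0.4]):
        probs = goal_probabilities(start, waypoint, goals)
        print(waypoint, "->", np.round(probs, 2))
```

In the example, the waypoint that bends toward the intended cup early makes the correct goal far more probable to the observer, which is exactly the effect legible motion exaggerates.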

The speed of the robot arm during HRC is another key measure. An early study into the relationship between the speed of a robot arm during collaboration and the person’s perceptions of the motion was conducted by Shibata and Inooka [78], first using a simulation and then a PUMA 561 robot arm. Using a 7-point Likert scale, participants were asked to assess the motion using seven adjective pairs: pleasant-unpleasant, smooth-awkward, fast-slow, careful-careless, interesting-boring, skilled-unskilled, and humanlike-mechanical. In this study, when the robot arm moved at the slowest speed (580 mm/s), it was perceived as too slow, unskilled, and boring. It should be noted that this is over twice the allowable speed under current guidelines. A possible reason for this may be the limitations of the path finding and joint movements of the time, whereas a modern robot arm would be able to provide smoother motions at lower speeds. Kulic and Croft [79] used a combination of physiological (skin conductance, heart muscle activity, and corrugator muscle activity) and survey (5-point Likert scale for anxiety, calm, and surprise) data to assess participants’ reactions to different speeds of a robot arm. This study found that as the speed of the robot arm increased, so did the participants’ anxiety, surprise, and arousal. An exploratory study by Charalambous et al. [80] investigated the factors that would influence a person’s trust during HRC with two industrial robots of different sizes. After completing a hand-over task with each robot, the participants were given semi-structured interviews. All participants reported that the motion and the speed of the robot had influenced their trust. The larger robot also resulted in a greater emphasis on speed, highlighting that a person’s perception of trustworthiness at a certain speed may be further influenced by the robot’s size.

Proximity was identified as a key attribute during HRI in mobile robotics and thus has been the focus of some research in HRC with robot arms. Tan et al. [81] measured the changes in participants’ mental workload during HRC with changing robot proximity. The mental workload was measured objectively (skin potential reflex) and subjectively (6-point Likert scale rating fear and surprise). Although the physiological measure showed a negative relationship with proximity, the subjective measures were very low across the different proximities and showed no significant difference. MacArthur et al. [82] conducted a more thorough analysis using known surveys (Human Robot Trust Scale, Interpersonal Trust Questionnaire, and Trust in Automation Scale) to establish a negative relationship between trust in the robot and robot distance. A decrease in trust with decreasing distance was also reported by Stark et al. [83], as participants moved away from the robot arm as it entered their personal space. As with speed, the trends in proximity are similar between robot arms and mobile robots. Both attributes are also controlled within ISO/TS 15066:2016 to maintain physical safety; however, these relationships establish that even within this guidance, the robot can have a negative impact on the person’s psychological safety.

As with mobile robots, a common limitation is the subjective nature of the measurements. To overcome the challenges of subjective measures, some studies have looked into methods for objective measurement. An early study by Kramer [84] reviewed a range of physiological measures, including event-related brain potentials (ERP), electroencephalographic (EEG) activity, endogenous eye blinks, and pupil diameter, among others. Although the report found that no single measurement technique was adequate to assess a single dimension of workload by itself, it could be argued that this was more a limitation of the technology available at the time. Brookings et al. [85] evaluated physiological changes in workload by comparing eye blink, heart rate, respiration, saccade, and EEG responses in various air traffic control tasks with the responses to NASA-TLX questionnaires. Of the objective measures, the EEG response showed the most sensitivity. As technology has improved, however, other physiological measures such as electrocardiograms, skin conductance, respiration, skin temperature, and eye tracking have been found to perform as well as the EEG response when measuring objective workload alongside subjective measures [86]. The EEG response has been shown to be a successful measure, with studies comparing EEG and NASA-TLX responses showing agreement in human–robot cooperation [87,88]. Physiological measures, therefore, show promise as a means of objectively measuring workload. A key limitation, however, is the currently intrusive equipment required to acquire the readings. As technology improves, as well as our understanding of the physiological responses to increases in workload, they will certainly prove a valuable asset for improving the psychological safety of the person during HRI and HRC.
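Since several of the cited studies compare physiological signals against NASA-TLX scores, the sketch below shows how a weighted NASA-TLX workload score is commonly computed from six subscale ratings and fifteen pairwise comparisons. The ratings and weights given are made-up example data, not values from any of the cited experiments.

```python
# Sketch of computing a weighted NASA-TLX workload score, the subjective
# measure several of the cited studies compare physiological data against.
# The ratings and pairwise-comparison tallies below are made-up example data.

SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Weighted TLX: ratings 0-100, weights are wins from 15 pairwise choices."""
    if sum(weights.values()) != 15:
        raise ValueError("pairwise weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

if __name__ == "__main__":
    ratings = {"mental": 70, "physical": 20, "temporal": 55,
               "performance": 30, "effort": 60, "frustration": 45}
    weights = {"mental": 5, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 3, "frustration": 1}
    print(f"weighted workload: {nasa_tlx(ratings, weights):.1f} / 100")
```

Such a subjective score is what objective measures like EEG response or skin conductance are typically validated against in the studies cited above.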

Improving the psychological safety of a person during HRC has a desirable side effect. A robot arm utilizing HAN generally increases the operator’s comfort, improving their efficiency [89] and also the efficiency of the robot, as it will have less idle time [46,90]. The reduced idle time may be attributed to the implementation of predictive path planners, a key part of HAN. Rather than reacting to the sudden appearance of the person and waiting until they have vacated to a safe distance, the robot can adapt and move around them, preventing an emergency stop from taking place. By accurately predicting where the person will be relative to its own position, the robot can also reduce annoyance, surprise, or obstruction [12]. Despite the perceived improvements in comfort levels experienced by people when a robot uses HAN, ref. [89] highlights that their study and similar previous studies only observe these changes for a relatively short period of time. As the end goal of many studies is for a system to be implemented in an industrial environment, the robots would be collaborating with human workers for an extended period of time, which may present unforeseen variables.

In industrial HRI, overall trust in the robot is linked to trust in the robot completing the task. Ref. [45] assessed the extent to which HRI task efficiency depends on how much the human trusts the robot with the task. The results showed that task efficiency did improve as trust increased, up to a point beyond which overreliance on the robot decreased performance. This suggests that there is an optimal level of trust, above which performance is impaired. As with path finding and obstacle avoidance that considers only physical safety, mobile robots and robot arms can follow similar principles, with some minor changes due to the way they move through the world and their different applications. Predictive path planners, speed, and proximity are all measures that can be transferred from mobile robots to robot arms with modification. Nevertheless, the impact these have on the person during HRI requires further study. The data from such studies can then be implemented into a HAN algorithm for a robot arm and can also aid in developing improved safety guidelines for robot arms in industrial HRI.

4 Conclusion

At the beginning of this review, a question was posed: How can the current Human–Robot Interaction safety guidance be optimized for maintaining the safety of the operator? The safety guidance set in the technical specification ISO/TS 15066:2016 presents methods of reducing the likelihood of physical injury as a result of a robot’s actions but, as highlighted in Section 1 of this article, it considers only the potential physical impact in HRC. Therefore, it can be improved. To improve the quality of the human element of HRC, it is important to develop a better understanding of how the robot’s action (or inaction) can influence the operator. Failing to consider the psychological element of the interaction experienced by the operator during HRC can reduce the efficiency of the team, as well as acceptance and job satisfaction. One of the primary solutions could be the inclusion of operators at the early design stages of a collaborative work cell. In doing this, the parameters for the robot can be set more appropriately for the task. These parameters can then be implemented in another potential solution: using the guidance that informs HAN to supplement existing guidance. Therefore, a second question was posed: How can the approaches for HAN in mobile robotics be applied to an industrial robot arm? The reviews in Sections 3.1 and 3.2 show the potential crossover between human aware planners in mobile robotics and in robot arms, with the following areas identified as readily transferrable:

  • Interpretation of a person’s intent through machine vision and learning

  • Robot motion based upon a person’s field of view

    However, other areas that have been identified but require further development include:

  • Legible and predictable robot motion

  • How the robot’s speed and proximity affect the person’s comfort

  • The link between the robot’s size and shape and the person’s comfort

  • The appropriate spacing model for an industrial robot arm

  • The psychological attributes of a person that are influenced by a robot’s attributes

The transfer of skills is also limited by a lack of research into the different perceptions of safety that occur between a mobile robot and an industrial robot arm. Although the robot arm has an extra dimension of movement, its base is fixed and its reach is limited, constraints that do not apply to mobile robots. Whereas physical-only safety operates on the principle that a person can be treated as an obstacle in the same way as any other object, HAN recognizes that the cognitive abilities of a person require special treatment. Although an obstacle will not be affected by the robot’s speed, size, proximity, or gaze, studies have established relationships between these elements and a person’s psychological well-being. Without considering the human aspect of HRC, there is a risk that the robots will not be fully accepted and that the efficiency of the tasks will be reduced. Furthermore, although the results from studies into mobile robotics can inform approaches for HAN in robot arms, the differences between the two types of robotics should be acknowledged. Therefore, there is a need for studies to better understand and develop the relationships between a robot arm’s attributes and a person’s psychology. A more developed understanding of these relationships will allow for better evaluation of the algorithms with respect to the person’s psychological well-being. While proximity to the person can prove a useful and easily measurable metric, it cannot be considered the only one for determining whether a robot is “Human Aware.”

Despite the limitations of studies into HAN, it is clear that the speed and proximity of a robot arm can affect a person’s comfort, trust, and workload. This can lead to an objective improvement when using HAN in HRC: the efficiency of both the human and the robot is increased. Even with this improved efficiency, HRC is not widespread in industry. The relatively few occurrences of such interaction can largely be attributed to safety regulations lagging behind advances in technology. Nevertheless, with further studies and research into HAN that highlight the advantages mentioned in Section 3.2, as well as the significant improvements in robotics safety without the requirement of physical barriers, this is due to change. A key limitation of studies into HAN, and of human factors in robotics more broadly, is the lack of an agreed upon formal definition for many of the social psychological concepts. This is further hindered by the lack of a universal measurement tool for these concepts within HRC. However, some tools are gaining prominence in the measurement of increasing automation and may prove beneficial to the evaluation of HAN.

The future proposed by Asimov envisioned a successful shared working environment between robots and humans, based on the understanding that the robot interprets the human as exactly that: a human, and not just a dynamic obstacle. There may be fewer vacuum tubes and mining stations on Mercury, but by furthering our knowledge of this key aspect of HRI, the acceptance of industrial robot arms in a shared workspace can be considered that much closer.

This article is an extended version of the published manuscript from ICRES 2019: M. Story, C. Jaksic, S. R. Fletcher, P. Webb, and J. Carberry, “Evaluating the use of HAN in Industrial Robot Arms” in ICRES 2019: International Conference on Robot Ethics and Standards, 2019, no. July, pp. 79–86.

  1. Funding information: This study was funded by an EPSRC BAE Systems CASE Award.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. The authors applied the SDC approach for the sequence of authors.

  3. Conflict of interest: Authors state no conflict of interest.

  4. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

[1] I. Asimov, I, Robot, Great Britain, Dobson Books Ltd, 1967.Search in Google Scholar

[2] IFR, “Executive Summary World Robotics 2017 Industrial Robots,” 2017.10.1002/wow3.113Search in Google Scholar

[3] IFR, “Executive Summary World Robotics 2018 Industrial Robots,” 2018.10.1002/wow3.137Search in Google Scholar

[4] IFR, “Executive Summary World Robotics 2019 Industrial Robots,” 2019.10.1002/wow3.149Search in Google Scholar

[5] G. Charalambous, S. R. Fletcher, and P. Webb, “The development of a human factors readiness level tool for implementing industrial human–robot collaboration,” Int. J. Adv. Manuf. Technol., vol. 91, no. 5–8, pp. 2465–2475, 2017, https://doi.org/10.1007/s00170-016-9876-6.10.1007/s00170-016-9876-6Search in Google Scholar

[6] R. Bloss, “Collaborative robots are rapidly providing major improvements in productivity, safety, programing ease, portability and cost while addressing many new applications,” Ind. Robot., vol. 43, no. 5, pp. 463–468, 2016, https://doi.org/10.1108/IR-05-2016-0148.10.1108/IR-05-2016-0148Search in Google Scholar

[7] T. Iio, S. Satake, T. Kanda, K. Hayashi, F. Ferreri, and N. Hagita, “Human-like guide robot that proactively explains exhibits,” Int. J. Soc. Robot., vol. 12, no. 2, pp. 549–566, 2019, https://doi.org/10.1007/s12369-019-00587-y.10.1007/s12369-019-00587-ySearch in Google Scholar

[8] F. Del Duchetto, P. Baxter, and M. Hanheide, “Lindsey the tour guide robot – usage patterns in a useum long-term deployment,” 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019, pp. 1–8.10.1109/RO-MAN46459.2019.8956329Search in Google Scholar

[9] S. Nonaka, K. Inoue, T. Arai, and Y. Mae, “Evaluation of human sense of security for coexisting robots using virtual reality. 1st report: evaluation of pick and place motion of humanoid robots, Proceedings of IEEE International Conference on Robotics and Automation, ICRA ’04, vol. 3, 2004, pp. 770–2775, https://doi.org/10.1109/ROBOT.2004.1307480.10.1109/ROBOT.2004.1307480Search in Google Scholar

[10] W. Burgard, A. Cremers, and D. Fox, “The interactive museum tour-guide robot,” Artif. Intell., vol. 114, no. 1–2, pp. 3–55, 1998.10.1016/S0004-3702(99)00070-3Search in Google Scholar

[11] J. T. Butler and A. Agah, “Psychological effects of behavior patterns of a mobile personal robot,” Auton. Robot, vol. 10, no. 2, pp. 185–202, 2001, https://doi.org/10.1023/A:1008986004181.10.1023/A:1008986004181Search in Google Scholar

[12] E. A. Sisbot, K. F. Marin-Urias, R. Alami, and T. Siméon, “A human aware mobile robot motion planner,” IEEE Trans. Robot., vol. 23, no. 5, pp. 874–883, 2007, https://doi.org/10.1109/TRO.2007.904911.10.1109/TRO.2007.904911Search in Google Scholar

[13] F. Flacco, T. Kroger, A. de Luca, and O. Khatib, “A depth space approach to human–robot collision avoidance,” Robotics and Automation (ICRA), 2012 IEEE International Conference, 2012, pp. 338–345, https://doi.org/10.1109/ICRA.2012.6225245.10.1109/ICRA.2012.6225245Search in Google Scholar

[14] ISO, “ISO/TS 15066 – Collaborative Robots: Present Status,” 2016.Search in Google Scholar

[15] ISO, “ISO 10218 ‘Robots and robotic devices – Safety requirements for industrial robots’, with parts I (‘Robots’) and 2 (‘Robot systems and integration’),” 2011.Search in Google Scholar

[16] E. W. Dijkstra, “A note on two problems in connection with graphs,” Numerische Mathematik, vol. 1, pp. 269–271, 1959, https://doi.org/10.1007/BF01386390.10.1007/BF01386390Search in Google Scholar

[17] P. E. Hart, N. J. Nilsson, and B. Raphael, “Formal basis for the heuristic determination of minimum cost paths,” IEEE Trans. Syst. Sci. Cybern., vol. 4, no. 2, pp. 100–107, 1968.10.1109/TSSC.1968.300136Search in Google Scholar

[18] O. Khatib and J.-F. Le Maitre, “Dynamic control of manipulators operating in a complex environment,” Theory Pract. Robot. Manipulators, 3rd CISM-IFToMM Symp., vol. 267, pp. 267–282, 1978.Search in Google Scholar

[19] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” Int. J. Robot. Res., vol. 5, no. 1, pp. 90–98, 1986.10.1109/ROBOT.1985.1087247Search in Google Scholar

[20] M. Popp, S. Prophet, G. Scholz, and G. F. Trommer, “A novel guidance and navigation system for MAVs capable of autonomous collision-free entering of buildings,” Gyroscopy Navigation, vol. 6, no. 3, pp. 157–165, 2015, https://doi.org/10.1134/S2075108715030128.10.1134/S2075108715030128Search in Google Scholar

[21] S. Prophet, G. Scholz, and G. F. Trommer, “Collision avoidance system with situational awareness capabilities for autonomous MAV indoor flights,” 2017 24th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), IEEE, 2017, pp. 1–8, https://doi.org/10.23919/ICINS.2017.7995565.10.23919/ICINS.2017.7995565Search in Google Scholar

[22] A. Azzabi and K. Nouri, “Path planning for autonomous mobile robot using potential field method,” 2017 International Conference on Advanced Systems and Electrical Technologies, vol. 9, no. 4, pp. 389–394, 2017, https://doi.org/10.5391/IJFIS.2009.9.4.315.10.1109/ASET.2017.7983725Search in Google Scholar

[23] O. Montiel, U. Orozco-Rosas, and R. Sepúlveda, “Path planning for mobile robots using bacterial potential field for avoiding static and dynamic obstacles,” Expert. Syst. Appl., vol. 42, no. 12, pp. 5177–5191, 2015, https://doi.org/10.1016/j.eswa.2015.02.033.10.1016/j.eswa.2015.02.033Search in Google Scholar

[24] P. Vadakkepat, K. C. Tan, and W. Ming-Liang, “Evolutionary artificial potential fields and their application in real time robot path planning,” Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512), vol. 1, 2000, pp. 256–263, https://doi.org/10.1109/CEC.2000.870304.10.1109/CEC.2000.870304Search in Google Scholar

[25] L. Zhou and W. Li, “Adaptive artificial potential field approach for obstacle avoidance path planning,” 2014 Seventh International Symposium on Computational Intelligence and Design, vol. 1, 2014, pp. 429–432, https://doi.org/10.1109/ISCID.2014.144.10.1109/ISCID.2014.144Search in Google Scholar

[26] G. Fedele, L. D’Alfonso, F. Chiaravalloti, and G. D’Aquila, “Obstacles avoidance based on switching potential functions,” J. Intell. Robotic Syst., vol. 90, pp. 387–405, 2018, https://doi.org/10.1007/s10846-017-0687-2.10.1007/s10846-017-0687-2Search in Google Scholar

[27] J. U. Kuehnle, Z. Xue, M. Stotz, J. M. Zoellner, A. Verl, and R. Dillmann, “Grasping in depth maps of time-of-flight cameras,” ROSE 2008 – IEEE International Workshop on Robotic and Sensors Environments Proceedings, October 2008, pp. 132–137, https://doi.org/10.1109/ROSE.2008.4669194.10.1109/ROSE.2008.4669194Search in Google Scholar

[28] Q. Wang, G. Kurillo, F. Ofli, and R. Bajcsy, “Evaluation of pose tracking accuracy in the first and second generations of microsoft Kinect,” Proceedings – 2015 IEEE International Conference on Healthcare Informatics, ICHI 2015, 2015, pp. 380–389, https://doi.org/10.1109/ICHI.2015.54.10.1109/ICHI.2015.54Search in Google Scholar

[29] T. B. Moeslund and E. Granum, “A survey of computer vision-based human motion capture,” Computer Vis. Image Underst., vol. 81, no. 3, pp. 231–268, 2001, https://doi.org/10.1006/cviu.2000.0897.10.1006/cviu.2000.0897Search in Google Scholar

[30] M. J. Rosenstrauch, T. J. Pannen, and J. Krüger, “Human robot collaboration – Using kinect v2 for ISO/TS 15066 speed and separation monitoring,” Procedia CIRP, vol. 76, pp. 183–186, 2018, https://doi.org/10.1016/j.procir.2018.01.026.10.1016/j.procir.2018.01.026Search in Google Scholar

[31] P. Rückert, J. Adam, B. Papenberg, H. Paulus, and K. Tracht, “Calibration of a modular assembly system for personalized and adaptive human robot collaboration,” Procedia CIRP, vol. 76, pp. 199–204, 2018, https://doi.org/10.1016/j.procir.2018.01.041.10.1016/j.procir.2018.01.041Search in Google Scholar

[32] S. Yang, W. Xu, Z. Liu, Z. Zhou, and D. T. Pham, “Multi-source vision perception for human–robot collaboration in manufacturing,” ICNSC 2018 – 15th IEEE International Conference on Networking, Sensing and Control, 2018, pp. 1–6, https://doi.org/10.1109/ICNSC.2018.8361333.10.1109/ICNSC.2018.8361333Search in Google Scholar

[33] Y. Yang, H. Yan, M. Dehghan, and M. H. Ang, “Real-time human-robot interaction in complex environment using kinectic v2 image recognition,” Proceedings of the 2015 7th IEEE International Conference on Cybernetics and Intelligent Systems, CIS 2015 and Robotics, Automation and Mechatronics, RAM 2015, 2015, pp. 112–117, https://doi.org/10.1109/ICCIS.2015.7274606.10.1109/ICCIS.2015.7274606Search in Google Scholar

[34] P. Zhang, P. Jin, G. Du, and X. Liu, “Ensuring safety in human–robot coexisting environment based on two-level protection,” Ind. Robot., vol. 43, no. 3, pp. 264–273, 2016, https://doi.org/10.1108/IR-12-2015-0222.10.1108/IR-12-2015-0222Search in Google Scholar

[35] Y. Nakauchi and R. Simmons, “A social robot that stands in line,” Auton. Robot, vol. 12, no. 3, pp. 313–324, 2002, https://doi.org/10.1023/A:1015273816637.10.1109/IROS.2000.894631Search in Google Scholar

[36] K. Charalampous, I. Kostavelis, and A. Gasteratos, “Recent trends in social aware robot navigation: A survey,” Robot. Auton. Syst., vol. 93, pp. 85–104, 2017, https://doi.org/10.1016/j.robot.2017.03.002.10.1016/j.robot.2017.03.002Search in Google Scholar

[37] R. Alami, I. Belousov, S. Fleury, M. Herrb, F. Ingrand, J. Minguez, et al., “Diligent: Towards a human-friendly navigation system,” IEEE International Conference on Intelligent Robots and Systems, vol. 1, 2000, pp. 21–26, https://doi.org/10.1109/IROS.2000.894576.10.1109/IROS.2000.894576Search in Google Scholar

[38] M. Salem, G. Lakatos, F. Amirabdollahian, and K. Dautenhahn, “Would you trust a (faulty) robot?,” Proceedings of the Tenth Annual ACM/IEEE International Conference on Human–Robot Interaction – HRI ’15, 2015, pp. 141–148, https://doi.org/10.1145/2696454.2696497.10.1145/2696454.2696497Search in Google Scholar

[39] M. V. Giuliani and F. Fornara, “If I had a robot at home. Peoples’ representation of domestic robots,” J. Integr. Care, vol. 18, no. 4, pp. 19–25, 2010, https://doi.org/10.5042/jic.2010.0375.10.5042/jic.2010.0375Search in Google Scholar

[40] L. Giusti and P. Marti, “Interpretative dynamics in human robot interaction,” ROMAN 2006 – The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, pp. 111–116, https://doi.org/10.1109/ROMAN.2006.314403.10.1109/ROMAN.2006.314403Search in Google Scholar

[41] C. H. Weinrich, M. Volkhardt, E. Einhorn, and H.-M. Gross, “Prediction of human avoidance behavior by lifelong learning for socially compliant robot navigation,” 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2013, pp. 376–381, https://doi.org/10.1109/ICRA.2013.6630603.10.1109/ICRA.2013.6630603Search in Google Scholar

[42] B. Cain, “A review of the mental workload literature,” Defence Research and Development Canada, Toronto, 2007.Search in Google Scholar

[43] J. D. Lee and K. A. See, “Trust in automation: Designing for appropriate reliance,” Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004, https://doi.org/10.1518/hfes.46.1.50_30392.10.1518/hfes.46.1.50_30392Search in Google Scholar PubMed

[44] N. L. Tenhundfeld, E. J. de Visser, K. S. Haring, A. J. Ries, V. S. Finomore, and C. C. Tossell, “Calibrating trust in automation through familiarity with the autoparking feature of a tesla model X,” J. Cognit. Eng. Decis. Mak., vol. 13, no. 4, pp. 279–294, 2019, https://doi.org/10.1177/1555343419869083.10.1177/1555343419869083Search in Google Scholar

[45] M. Chen, S. Nikolaidis, H. Soh, D. Hsu, and S. Srinivasa, “Planning with trust for human–robot collaboration,” Proceedings of the 2018 ACM/IEEE International Conference on Human–Robot Interaction, Feb 2018, pp. 307–315, https://doi.org/10.1145/3171221.3171264.10.1145/3171221.3171264Search in Google Scholar

[46] P. A. Lasota and J. A. Shah, “Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration,” Hum. Factors, vol. 57, no. 1, pp. 21–33, 2015, https://doi.org/10.1177/0018720814565188.10.1177/0018720814565188Search in Google Scholar PubMed PubMed Central

[47] V. V. Unhelkar, H. C. Siu, and J. A. Shah, “Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks,” Proceedings of the 2014 ACM/IEEE International Conference on Human–Robot Interaction – HRI ’14, 2014, pp. 82–89, https://doi.org/10.1145/2559636.2559655.10.1145/2559636.2559655Search in Google Scholar

[48] G. Hoffman and C. Breazeal, “Effects of anticipatory action on human–robot teamwork efficiency, fluency, and perception of team,” Proceedings of the ACM/IEEE International Conference on Human–Robot Interaction – HRI ’07, 2007, pp. 1–8, https://doi.org/10.1145/1228716.1228718.

[49] L. Takayama and C. Pantofaru, “Influences on proxemic behaviors in human–robot interaction,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, 2009, pp. 5495–5502, https://doi.org/10.1109/IROS.2009.5354145.

[50] S. M. Merritt and D. R. Ilgen, “Not all trust is created equal: Dispositional and history-based trust in human-automation interactions,” Hum. Factors, vol. 50, no. 2, pp. 194–210, 2008, https://doi.org/10.1518/001872008X288574.

[51] J. V. Gómez, N. Mavridis, and S. Garrido, “Social path planning: generic human–robot interaction framework for robotic navigation tasks,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'13), 2013.

[52] C. Vassallo, A. H. Olivier, P. Souères, A. Crétual, O. Stasse, and J. Pettré, “How do walkers avoid a mobile robot crossing their way?,” Gait Posture, vol. 51, pp. 97–103, 2017, https://doi.org/10.1016/j.gaitpost.2016.09.022.

[53] E. A. Sisbot, L. F. Marin, R. Alami, and T. Simeon, “A mobile robot that performs human acceptable motions,” IEEE International Conference on Intelligent Robots and Systems, 2006, pp. 1811–1816, https://doi.org/10.1109/IROS.2006.282223.

[54] E. A. Sisbot, R. Alami, T. Simeon, K. Dautenhahn, M. Walters, and S. Woods, “Navigation in the presence of humans,” Proceedings of 2005 5th IEEE-RAS International Conference on Humanoid Robots, 2005, pp. 181–188, https://doi.org/10.1109/ICHR.2005.1573565.

[55] A. Mateus, D. Ribeiro, P. Miraldo, and J. C. Nascimento, “Efficient and robust pedestrian detection using deep learning for human-aware navigation,” Robot. Auton. Syst., vol. 113, pp. 23–37, 2019, https://doi.org/10.1016/j.robot.2018.12.007.

[56] E. T. Hall, The Hidden Dimension: Man’s Use of Space in Public and Private, Anchor Books, New York, 1969.

[57] T. Nomura, T. Shintani, K. Fujii, and K. Hokabe, “Experimental investigation of relationships between anxiety, negative attitudes, and allowable distance of robots,” Proceedings of the 2nd IASTED International Conference on Human–Computer Interaction, IASTED-HCI ‘07, 2007, pp. 13–18.

[58] L. Takayama and C. Pantofaru, “Influences on proxemic behaviors in human–robot interaction,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, 2009, pp. 5495–5502, https://doi.org/10.1109/IROS.2009.5354145.

[59] M. Obaid, E. B. Sandoval, J. Zlotowski, E. Moltchanova, C. A. Basedow, and C. Bartneck, “Stop! That is close enough. How body postures influence human–robot proximity,” 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, 2016, pp. 354–361, https://doi.org/10.1109/ROMAN.2016.7745155.

[60] M. L. Walters, K. Dautenhahn, R. Boekhorst, K. Koay, C. Kaouri, S. Woods, et al., “The influence of subjects’ personality traits on predicting comfortable human–robot approach distances,” Proceedings of COGSCI 2005 Workshop, 2005, pp. 29–37.

[61] P. Papadakis, P. Rives, and A. Spalanzani, “Adaptive spacing in human–robot interactions,” 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), 2014, pp. 2627–2632, https://doi.org/10.1109/IROS.2014.6942921.

[62] A. K. Pandey and R. Alami, “A framework for adapting social conventions in a mobile robot motion in human-centered environment,” 2009 International Conference on Advanced Robotics, 2009, pp. 1–8.

[63] A. K. Pandey and R. Alami, “A framework towards a socially aware mobile robot motion in human-centered dynamic environment,” IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 – Conference Proceedings, 2010, pp. 5855–5860, https://doi.org/10.1109/IROS.2010.5649688.

[64] S. Sun, X. Zhao, Q. Li, and M. Tan, “Inverse reinforcement learning-based time-dependent A* planner for human-aware robot navigation with local vision,” Adv. Robot., vol. 34, no. 13, pp. 888–901, 2020, https://doi.org/10.1080/01691864.2020.1753569.

[65] A. Vega-Magro, L. V. Calderita, P. Bustos, and P. Nunez, “Human-aware robot navigation based on time-dependent social interaction spaces: A use case for assistive robotics,” 2020 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2020, 2020, pp. 140–145, https://doi.org/10.1109/ICARSC49921.2020.9096113.

[66] G. Ferrer, A. Garrell, and A. Sanfeliu, “Social-aware robot navigation in urban environments,” 2013 European Conference on Mobile Robots, ECMR 2013 – Conference Proceedings, 2013, pp. 331–336, https://doi.org/10.1109/ECMR.2013.6698863.

[67] G. Ferrer, A. Garrell, and A. Sanfeliu, “Robot companion: A social-force based approach with human awareness-navigation in crowded environments,” IEEE International Conference on Intelligent Robots and Systems, 2013, pp. 1688–1694, https://doi.org/10.1109/IROS.2013.6696576.

[68] D. Helbing and P. Molnár, “Social force model for pedestrian dynamics,” Phys. Rev. E, vol. 51, no. 5, pp. 4282–4286, 1995, https://doi.org/10.1103/PhysRevE.51.4282.

[69] G. Ferrer and A. Sanfeliu, “Proactive kinodynamic planning using the Extended Social Force Model and human motion prediction in urban environments,” 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), 2014, pp. 1730–1735, https://doi.org/10.1109/IROS.2014.6942788.

[70] M. Shiomi, F. Zanlungo, K. Hayashi, and T. Kanda, “Towards a socially acceptable collision avoidance for a mobile robot navigating among pedestrians using a pedestrian model,” Int. J. Soc. Robot., vol. 6, no. 3, pp. 443–455, 2014, https://doi.org/10.1007/s12369-014-0238-y.

[71] H. Khambhaita and R. Alami, “Assessing the social criteria for human–robot collaborative navigation: A comparison of human-aware navigation planners,” IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017, pp. 1140–1145, https://doi.org/10.1109/ROMAN.2017.8172447.

[72] A. D. Dragan, K. C. T. Lee, and S. S. Srinivasa, “Legibility and predictability of robot motion,” ACM/IEEE International Conference on Human–Robot Interaction, vol. 1, 2013, pp. 301–308, https://doi.org/10.1109/HRI.2013.6483603.

[73] E. Ivanova, G. Carboni, J. Eden, J. Kruger, and E. Burdet, “For motion assistance humans prefer to rely on a robot rather than on an unpredictable human,” IEEE Open. J. Eng. Med. Biol., vol. 1, pp. 133–139, 2020, https://doi.org/10.1109/ojemb.2020.2987885.

[74] A. Sardar, M. Joosse, A. Weiss, and V. Evers, “Don’t stand so close to me: Users’ attitudinal and behavioral responses to personal space invasion by robots,” HRI’12 – Proceedings of the 7th Annual ACM/IEEE International Conference on Human–Robot Interaction, 2012, pp. 229–230, https://doi.org/10.1145/2157689.2157769.

[75] H. Kozima, M. P. Michalowski, and C. Nakagawa, “Keepon: A playful robot for research, therapy, and entertainment,” Int. J. Soc. Robot., vol. 1, no. 1, pp. 3–18, 2009, https://doi.org/10.1007/s12369-008-0009-8.

[76] C. Bodden, D. Rakita, B. Mutlu, and M. Gleicher, “Evaluating intent-expressive robot arm motion,” 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016, pp. 658–663, https://doi.org/10.1109/ROMAN.2016.7745188. Preprint available at https://graphics.cs.wisc.edu/Papers/2016/BRMG16/roman2016_preprint.pdf

[77] A. D. Dragan, S. Bauman, J. Forlizzi, and S. S. Srinivasa, “Effects of robot motion on human–robot collaboration,” Proceedings of the Tenth Annual ACM/IEEE International Conference on Human–Robot Interaction – HRI ’15, vol. 1, 2015, pp. 51–58, https://doi.org/10.1145/2696454.2696473.

[78] S. Shibata and H. Inooka, “Psychological evaluations of robot motions,” Int. J. Ind. Ergonom., vol. 21, no. 6, pp. 483–494, 1998, https://doi.org/10.1016/S0169-8141(97)00004-8.

[79] D. Kulic and E. Croft, “Anxiety detection during human–robot interaction,” 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2005, pp. 389–394, https://doi.org/10.1109/IROS.2005.1545012.

[80] G. Charalambous, S. Fletcher, and P. Webb, “The development of a scale to evaluate trust in industrial human–robot collaboration,” Int. J. Soc. Robot., vol. 8, no. 2, pp. 193–209, 2016, https://doi.org/10.1007/s12369-015-0333-8.

[81] J. T. C. Tan, F. Duan, Y. Zhang, K. Watanabe, R. Kato, and T. Arai, “Human–robot collaboration in cellular manufacturing: Design and development,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, pp. 29–34, https://doi.org/10.1109/IROS.2009.5354155.

[82] K. R. MacArthur, K. Stowers, and P. A. Hancock, “Human–robot interaction: proximity and speed-slowly back away from the robot!,” Adv. Hum. Factors Robot. Unmanned Syst., vol. 499, pp. 365–374, 2017, https://doi.org/10.1007/978-3-319-41959-6_30.

[83] J. Stark, R. R. C. Mota, and E. Sharlin, “Personal space intrusion in human–robot collaboration,” HRI ‘18: Companion of the 2018 ACM/IEEE International Conference on Human–Robot Interaction, 2018, pp. 245–246, https://doi.org/10.1145/3173386.3176998.

[84] A. F. Kramer, “Physiological metrics of mental workload: A review of recent progress,” Multiple-Task Performance, pp. 279–328, 1990, https://doi.org/10.21236/ADA223701.

[85] J. B. Brookings, G. F. Wilson, and C. R. Swain, “Psychophysiological responses to changes in workload during simulated air traffic control,” Biol. Psychol., vol. 42, no. 3, pp. 361–377, 1996, https://doi.org/10.1016/0301-0511(95)05167-8.

[86] D. Novak, B. Beyeler, X. Omlin, and R. Riener, “Workload estimation in physical human–robot interaction using physiological measurements,” Interact. Comp., vol. 27, no. 6, pp. 616–629, 2015, https://doi.org/10.1093/iwc/iwu021.

[87] A. H. Memar and E. T. Esfahani, “Objective assessment of human workload in physical human–robot cooperation using brain monitoring,” ACM Trans. Hum.-Robot Interact., vol. 9, no. 2, art. 13, pp. 1–21, 2020, https://doi.org/10.1145/3368854.

[88] S. B. Shafiei, A. S. Elsayed, A. A. Hussein, U. Iqbal, and K. A. Guru, “Evaluating the mental workload during robot-assisted surgery utilizing network flexibility of human brain,” IEEE Access, vol. 8, pp. 204012–204019, 2020, https://doi.org/10.1109/ACCESS.2020.3036751.

[89] P. A. Lasota and J. A. Shah, “Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration,” Hum. Factors, vol. 57, no. 1, pp. 21–33, 2015, https://doi.org/10.1177/0018720814565188.

[90] V. V. Unhelkar, P. A. Lasota, Q. Tyroller, R.-D. Buhai, L. Marceau, B. Deml, et al., “Human-aware robotic assistant for collaborative assembly: integrating human motion prediction with planning in time,” IEEE Robot. Autom. Lett., vol. 3, no. 3, pp. 2394–2401, 2018, https://doi.org/10.1109/LRA.2018.2812906.

Received: 2020-07-07
Revised: 2021-05-04
Accepted: 2021-05-13
Published Online: 2021-08-27

© 2021 Matthew Story et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
