Abstract
Feedback design is an important aspect of person-following robots for older adults. This paper presents a user-centered design approach that keeps the design focused on users' needs and preferences. A sequence of user studies with a total of 35 older adults (aged 62 years and older) was conducted to explore their preferences regarding feedback parameters for a socially assistive person-following robot. The preferred level of robot transparency and the desired content of the feedback were explored first, followed by an assessment of the preferred mode and timing of feedback. The chosen feedback parameters were then implemented and tested in a final experiment to evaluate the effectiveness of the design. Results revealed that older adults preferred to receive only basic status information. They preferred voice feedback over tone, delivered at a continuous rate to keep them constantly aware of the state and actions of the robot. The outcome of the study is a further step toward feedback design guidelines that could improve interaction quality in person-following robots for older adults.
1 Introduction
Socially assistive robots (SARs) are being developed to assist older adults in a wide range of activities. A major effort is focused on instrumental activities of daily living (IADLs), tasks that are not mandatory for fundamental functioning but are essential for independent living and interaction with the environment (e.g., housekeeping or shopping) [1]. Some of these activities can be made easier for older adults with the assistance of a person-following robot. The robot can be programmed to autonomously track the older adult and follow as he or she moves while providing assistance. It often has a compartment to carry the belongings of the user as it follows. This relieves older adults of the physical stress of carrying loads while walking and performing other IADLs [2]. The robot can also serve the purposes of safety monitoring and companionship while supporting the older adult in maintaining independence at home and outside.
Person-following is an important aspect in many service robotic applications [3] but it should be designed to conform with social norms and cultural values in order to inspire confidence and acceptability in the users. To create robots that move in socially acceptable manners, it is important to consider a multitude of parameters. These parameters include the robots’ speed, acceleration and deceleration properties, the lead human’s walking speed, and the appropriate physical proximity, as a function of the environment (e.g., a narrow corridor vs. an open room), the context (e.g., routine vs. urgent), and the user’s physical state and intentions [2, 4, 5]. In addition to the robot’s movement, there are other crucial components in the user’s interaction with the robot that can affect the quality of the interaction (QoI) [6].
Identifying and addressing these crucial components requires user studies to improve and ensure smoother human-robot interactions with person-following robots [7]. This is particularly critical for older adults, who have specific needs that require attention [8, 9]. Some of these needs are perception-related, such as declines in visual, auditory and haptic acuity [10]. Others relate to cognitive challenges that affect the rate of understanding, integrating and processing information [11]. Physical challenges connected with stability and movement limitations also require special consideration during design [9]. SARs designed for older adults must therefore cater to these needs to ensure that age-related limitations do not partially or completely prevent their use.
The current paper utilizes this user-needs approach in the development of a user-centered feedback design for a person-following robot that matches the perceptual capabilities and preferences of the potential users (older adults). The paper also reveals the positive influence such user-centered approach has on various aspects of interaction between the older adults and the robot.
2 Related work
Successful interaction requires communication between the human and the robot, which generally involves sending and receiving information to achieve specific goals [12]. Communicative actions, when presented in the most comprehensible form, promote understanding, which aids a successful interaction of the user with the robot [13, 14, 15, 16]. The communicative actions from the robot to the user, herein referred to as feedback, are the presentation of information by the robot to the user in response to the user's actions.
The content of the feedback information provided is an essential influencing factor for successful interaction between humans and robots [17]. Feedback content is predicated on the desired level of transparency (LOT) [18, 19]. LOT, in this context, can be described as the degree of task, environment, robot, human, and interaction-related information provided to users while the robot is performing its task [20].
Task-related information consists of information provided by the robot to inform the user of its state, or its actions in relation to the task. It also includes information on the reasons for actions taken while executing the task, the next actions to be taken and the progress of the task. This was demonstrated in the situation awareness based transparency model (SAT) for autonomous systems developed by Chen et al. [21] which mirrors Endsley’s model of situation awareness [22]. An adaptation of that model in relation to a person-following robot is presented in Table 1.
LOT | Information provided |
---|---|
1. Perception | Information about the state of the robot and/or contextual information that the user must be aware of. For example – the robot makes a sound or says ’yes’ when it acknowledges the user giving a command |
2. Comprehension | Information about how the state of the robot or the context may affect achieving the goal. For example – the robot verbally says that it is following the user from behind in a distance of 2 meters |
3. Projection | Information about how the future state of the robot may change based on the context. For example – the robot verbally says that in a few meters it will have to slow down due to an obstacle ahead |
Environment-related information that the robot could provide includes constraints of the environment, the type of environment and any other safety-related information about the environment [5, 20, 23]. Robot-related information includes information from the robot regarding its degree of reliability, the underlying principles of its decision making and all other information pertaining to the robot (for example, battery status, operating mode, or how to use a specific feature on the robot) [24].
Human-related information includes the human’s physical and emotional state if the robot can assess it. It also includes information regarding the human’s effort in the task, workload or stress encountered if it can be provided by the robot [25]. Interaction-related information involves details of the roles of the robot and human in the interaction, shared awareness and dynamics of the team-work [25, 26].
Implications of providing information to users were explored in [27]. The authors suggested that a robot that is truly transparent may contravene the ideal of worthy companionship, in which the companion has the social value of independence, agency and autonomy over disclosing information. They hypothesized that as transparency increases, the user may perceive the robot more as a tool than as a companion. This is contrary to the expectation in domestic and healthcare settings, where users are expected to interact with robots as partners, companions and entities capable of caring for them. It was recommended that various levels of transparency in the robot's communication be evaluated in a wide range of domestic environments to explore the relationship between transparency, utility and trust in HRI [27].
How the robot communicates is also a crucial component of the interaction, in relation to what information is being communicated [27, 28, 29]. The information can be presented in various modes, such as auditory, visual or haptic [11, 30]. It can also take other non-verbal forms, such as eye blinks, shifts in gaze (for robots with a face) or body posture for humanoid robots [31]. Implicit non-verbal communication positively impacts understandability, efficiency and robustness to errors arising from miscommunication [31]. Transparency often helps to reduce conflict in joint task situations when such errors occur [31]. The effect of transparency and communication modality on trust was examined in [32]. Modality was not significant in that study, which used a simulated robot deployed on a desktop computer, though the transparency manipulation was significant. Such an interaction differs from interaction with a mobile, embodied robot such as the person-following robot this paper focuses on. Also, the users in [32] were undergraduate students (aged 18-22), who have different characteristics and perceptual peculiarities from older adults. It was recommended that more user studies be carried out in specific domains to determine the influence of information level, modality and content on trust.
When discussing strategies to foster transparency between the human and the robot, it was recommended that the interface through which the human interacts with the robot provide useful information relating to the task and environment [20]. The author cautioned that too much information or a non-intuitive display may cause confusion or frustration for the user [20]. This is in agreement with the findings in [33], where it was additionally noted that multi-modal communication aided users' performance. Kim and Hinds [34] remarked in their study that users understand the robot better if it explains the reasons behind its behavior. This was confirmed by [35] in an unmanned aerial system scenario with multiple operators, with the recommendation that the hypothesis be further investigated in other scenarios to determine whether the findings vary with the complexity and nature of the task or environment. Studies reviewed in [20] added that cues signifying what the robot is doing, its reliability status and the presence of a face on the robot help the user trust the robot. It was also noted that the style and modality of communication, including etiquette, robotic emotional expressions and gestures in the feedback, could influence the performance of users of the automated system [20].
Timing of the feedback is also critical to maintain comprehension of the information being communicated [36]. For instance, feedback given too late causes confusion [19]. Temporal immediacy between a user’s input and the robot’s response influences the naturalness of the interaction [37]. To increase the trust of the user it is important to provide continuous feedback regarding the reliability of the robot [38]. This agrees with the findings in [32, 36] related to providing a continuous stream of information. The question of which information to provide continuously and which information to reserve for the user’s demand arises. This often varies based on the type of task, feedback modality, and the type of potential users [39]. The preference of the user regarding feedback timing along with the content and mode of feedback in specific tasks is essential to foster smoother coordination and collaboration between the human and the robot.
In previous studies involving person-following robot applications, most of the developments did not explicitly incorporate feedback from the robot regarding its actions as it follows. The robot simply followed the target person as soon as the person was detected in a predetermined range, as noted in [2]. This caused confusion for many of the participants regarding what the robot was doing at any given time. Several participants were unsure whether the robot was following them, stopping or had lost track. This lack of communication from the robot can lead to a loss of SA, leaving users uncertain of the state of the interaction at each point in time [40]. This could potentially degrade older adults' quality of interaction with the robot. The few studies that incorporated feedback [41, 42, 43] provided messages acknowledging user commands, such as saying 'yes' or other specific expressions [42]. These were implemented as part of the robot's behavior without explicit user studies to determine the preferred content, mode or timing of feedback from the robot. There is generally a gap in user-centered preferences in the design of feedback for person-following robots [2], particularly those used in eldercare [11].
The current study presents a user-centered design approach to ensure the design is focused on older adults' needs and preferences. Older adults' preferences for feedback design were evaluated in a series of empirical studies. The feedback design was constructed consecutively, looking first at the preferred robot LOT (perception, comprehension, projection) and the content of the information to be presented (depending on the LOT), followed by the mode (voice or tone) and the timing and frequency of the feedback (continuous or discrete). It is crucial to note that when identifying preferred feedback parameters, individual differences come into play [44]. There are several sources of individual differences in older adults that usually have potential implications for the design process [10, 44]. Two of these sources, considered in this study, were age and gender. The influence of these factors on the feedback design was highlighted through the analyses. The aim was to improve the quality of interaction while taking the different age groups and genders into account, ensuring increased user satisfaction and acceptance.
This paper presents a comprehensive analysis of our previous study [5], which highlighted the importance of the feedback design considerations but did not provide the details and lacked in-depth analyses. Additionally, we provide new analyses regarding the influence of gender and age on the feedback design. Analyses of the influence of participants' predisposition toward robots before the interaction are also included. Finally, the paper presents design guidelines for feedback design in the development of an assistive person-following robot for older adults.
3 Methods
3.1 Overview
The study was constructed from a coactive design perspective. It involved preliminary discussions with older people about their expectations of a robot in the context of person-following. This aligned the thoughts of the potential users and designers within the same conceptual design zone, ensuring robot performance is tuned to meet the users' expectations. These preliminary discussions centered on the overarching goals of the robot, such as what the robot does, why and how. This laid the foundation of the intentional model of transparency on which the specific task-related model of transparency addressed in the current study was built. Preliminary experiments were also conducted to explore proxemics and movement preferences for the robot as it follows the user in different environmental conditions [5]. These provided environment-related preferences, constraints and context that guided the feedback design options in the current work. In this research, a sequence of experimental user studies with older adults was performed to evaluate, step by step:
What level of transparency would the older adults desire and what would they prefer as feedback content at their desired LOT?
Which feedback mode would the older adults prefer?
What would the preferred feedback timing be?
The design parameters gathered in these user studies were implemented and tested in a final experiment that evaluated whether the feedback implementation improves the quality of interaction.
3.2 Apparatus
A Pioneer LX mobile robot (50 cm width, 70 cm length and 45 cm height) equipped with an integrated on-board computer, 1.8 GHz Dual Core processor, and 2 GB DDR3 RAM was used.
A built-in SICK S300 scanning laser rangefinder (LRF), mounted approximately 20 cm above the ground, was used to detect nearby obstacles and stop the robot if it detected an object 50 cm from its core. A Kinect camera with a pan mechanism was added to the robot, mounted 1.5 m from the ground, as shown in Figure 1. The person tracking and following commands were developed and executed in ROS [45] and were sent to the Pioneer LX's onboard computer using a TPLINK router with wireless speed of up to 300 Mbps.
3.3 Algorithm development
The algorithm works without a map. This is important to ensure flexible operation in a multitude of environments. The map of the environment used in this study is presented in Figure 2. Open Track [46, 47] is used to identify and track the coordinates of the person to be followed. Some adjustments were incorporated to ensure it detects a human 1.4 m to 2 m tall, with a confidence level threshold of 1.1.
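As an illustration, the height and confidence filtering described above can be sketched as a simple predicate over tracker detections (a hedged sketch; the field names are illustrative, not Open Track's actual API):

```python
# Hedged sketch of the detection filter: keep only detections whose
# estimated height lies between 1.4 m and 2 m and whose confidence
# exceeds the 1.1 threshold. Field names are illustrative assumptions.
MIN_HEIGHT_M = 1.4
MAX_HEIGHT_M = 2.0
CONFIDENCE_THRESHOLD = 1.1

def filter_person_detections(detections):
    """Return detections that plausibly correspond to a standing adult."""
    return [d for d in detections
            if MIN_HEIGHT_M <= d["height"] <= MAX_HEIGHT_M
            and d["confidence"] > CONFIDENCE_THRESHOLD]
```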
The robot selects the first person detected and moves to the defined position behind the person. The person-following algorithm that moves the robot in this manner is described in a previous study [48]. The person's position is continuously estimated using the robot's pan angle (Rangle) and the angle of the detected person (Pangle), measured from the centre of the robot. The angular error (Eangle) was measured as the difference between the angle of the detected person and the robot's pan angle. The person's position (coordinates X, Y) is calculated as follows:
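A minimal sketch consistent with the description, assuming planar geometry in which the person's bearing from the robot's centre is the sum of the pan angle and the detection angle, is:

```python
import math

def estimate_person_position(r_angle, p_angle, distance):
    """Estimate the person's (X, Y) coordinates in the robot's frame.

    r_angle  -- current pan angle of the camera, Rangle (rad)
    p_angle  -- angle of the detected person relative to the camera, Pangle (rad)
    distance -- measured range to the person (m)

    Assumes planar geometry (an illustrative assumption): the person's
    bearing from the robot's centre is r_angle + p_angle.
    """
    bearing = r_angle + p_angle
    return distance * math.cos(bearing), distance * math.sin(bearing)
```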
The linear velocity (lvel) of the robot is updated dynamically based on the distance between the robot and the target using the distance proportional controller (Kpdist). The angular velocity (avel) is also updated dynamically based on the angular displacement of the target with the aid of the angle proportional controller (Kpangle). These are calculated as follows:
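The velocity updates described above can be sketched as two proportional controllers (a hedged illustration rather than the study's implementation; the gain and set-point values here are assumptions, and the 1.0 m/s cap follows the safety limit reported in this section):

```python
def compute_velocities(distance, e_angle,
                       kp_dist=0.5, kp_angle=0.5,
                       following_distance=0.5, max_speed=1.0):
    """Proportional controllers for the following behaviour.

    lvel grows with the error between the measured distance to the
    person and the desired following distance, capped at the 1.0 m/s
    safety limit; avel turns the robot toward the person in proportion
    to the angular error Eangle. Gain values are illustrative.
    """
    lvel = kp_dist * (distance - following_distance)
    lvel = max(0.0, min(lvel, max_speed))   # never reverse, cap at limit
    avel = kp_angle * e_angle
    return lvel, avel
```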
Parameters were set according to recommendations for socially-aware person-following robots [2, 3, 22, 27, 49]. The maximum following speed was set to 1.0 m/s for safety reasons as emphasized in [5, 50, 51]. Other parameters such as acceleration coefficient (implemented using the proportional controller), following distance and following angle were set to 0.5, 0.3 m and 30°, respectively.
The robot's collision avoidance mechanism was set to stop the robot at the specified following distance (0.5 m). To achieve this, the LRF estimates the distance from the robot to the person or any other obstacle and proportionally reduces the speed of the robot at distances between 0.5 m and 1 m, until the robot finally stops at a distance of 0.5 m from the obstacle or person to avoid collision. This is similar to the technique used and described in previous studies [48].
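The proportional slow-down can be sketched as a speed scale factor applied to the commanded linear velocity (a minimal illustration of the behaviour described above, not the study's code):

```python
def speed_scale(obstacle_distance, stop_dist=0.5, slow_dist=1.0):
    """Speed scale factor near an obstacle or person.

    Full speed beyond 1 m, a proportional slow-down between 1 m and
    0.5 m, and a full stop at or below the 0.5 m stopping distance.
    """
    if obstacle_distance >= slow_dist:
        return 1.0
    if obstacle_distance <= stop_dist:
        return 0.0
    return (obstacle_distance - stop_dist) / (slow_dist - stop_dist)
```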
A summary of the algorithm for the person detection and following [4, 52] is presented in Table 2 below.
3.4 User studies
A sequence of user studies was conducted, as presented in Figure 3. Each experiment was independent, with the preferences identified in each experiment implemented in the succeeding one.
3.5 Procedure
Before each experimental session, participants completed a preliminary questionnaire. This included demographic information, the Technology Adoption Propensity (TAP) index [53] and the Negative Attitude toward Robots Scale (NARS) [54]. They were then introduced to the robot and to the experimental task. The task was to walk down a straight 25 m path to retrieve an object and place it on the robot. The robot was expected to follow the participant and communicate with the participant by voice in English as it followed. The audio feedback was provided directly by the robot's speakers at approximately 60 dB, above the noise level in the building, which was about 40 dB. The robot followed at a specified distance, angle and speed (section 3.3). The study took place in a 2.5 m wide corridor of a university laboratory building, as seen in the snapshot in Figure 1a and in the map in Figure 2.
A trial refers to each session in which a participant interacted with the robot, which included walking the designated path with the robot to retrieve a specified item at a specified location. A video of each trial was taken and saved for analyses in which objective measures were carefully assessed. In each experiment there were two experimenters, who documented observations. One experimenter explained the instructions to the users while the other ensured the safety of the participants as the robot followed (this experimenter was responsible for stopping the operation with the emergency button in case of a problem).
After each trial, a condensed form of Situation Awareness Rating Tool (SART) [55] was used to assess the level of situational awareness and understanding the participants had in each session. This was administered along with some other questions relating to the preference of the participants in each session as used in [30]. The post-trial questionnaire used a 3-point Likert scale with 3 representing "Agree" and 1 representing "Disagree". The 3-point scale was selected based on previous trials with older adults that revealed that they experienced difficulty and sometimes confusion in the process of indicating their opinion/preferences on the 5- or 7-point scales [5]. At the end of all trials, a final questionnaire was provided to enable the participants to express their opinion regarding the experience with the robot. All procedures were approved by the university’s ethical committee.
3.6 Analyses
The analyses were performed using the following objective and subjective measures acquired during the experiments as detailed below:
Objective measures: were assessed by analysis of the videos acquired during the experiment.
Understanding was measured as the number of clarifications participants requested from the experimenters while interacting with the robot. These sometimes created interruptions during the experiment. An interruption, in this context, is a period during the experiment when the participant does not understand the information the robot is presenting or what the robot is doing, and therefore pauses to ask the experimenter questions for clarification. Another measure of understanding was the reaction time, measured as the time it took participants to respond to the robot's instructions.
Effort was measured based on the participant's heart rate before and after each trial. The heart rate was measured with a Garmin Forerunner 235 watch, worn by the participants from the start of the experiment to its end. The heart rate readings (bpm) were taken at the start and end of each trial and used in the analyses [56, 57].
Engagement was measured based on the duration of the participants' gazes at the robot during communication, the total time spent with the robot, and the number of times participants initiated communication with the robot while gazing at it.
Trust was measured via the overall time spent on the task of walking to pick up the item without looking back at the robot behind, and the time spent waiting for the robot when it lost track or was delayed.
Comfort was measured as the number of times participants glanced back at the robot.
Subjective measures: Users’ responses regarding their level of understanding, comfort, engagement, persuasiveness and satisfaction were assessed through questionnaires and short interviews at the end of each experimental trial.
Data Analyses: The tests were designed as two-tailed with a significance level of 0.05. The model for the analyses was the General Linear Mixed Model (GLMM) with user ID included as a random effect to account for individual differences among participants.
4 Level of transparency and content of feedback
The aim of this experiment was to provide the users sufficient situation awareness without overwhelming them with excess information. The preferable level of transparency was explored along with the appropriate information content.
4.1 Experimental design
Independent Variables: Level of transparency was the independent variable. Three levels of information were presented to the participant by the robot. At the perception level of transparency, the robot communicated to the participant what it was doing (e.g., 'following', 'stopping'). At the comprehension level, the robot communicated why it was doing what it was doing (e.g., 'stopping because the participant stopped', 'stopping because of an obstacle'). At the projection level, the robot communicated what it was planning to do next (e.g., 'I will stop whenever you stop').
Dependent Variables: Preferences regarding the amount of information participants wanted the robot to present were collected through questionnaires and short interviews that contained specific items related to the participants' understanding of the robot's feedback, level of comfort and mental workload while interacting at the various levels of transparency. The mental workload assessment was included due to the mental effort that could be required by older adults to process the information the robot presents to them [11, 44]. After the robot presented the information described above, participants were asked for their preferences and were also given the opportunity to suggest other expressions or information they would want the robot to provide. These responses were collected through questionnaires and interviews.
Participants: Thirteen older adult participants (8 Females, 5 Males) aged 65-85 were recruited. They were all healthy participants with no physical disability, vision or hearing impairment. A short interview was held with them before the experiment commenced to ascertain their comfort with the experiments and understanding of the procedure. Each participant experienced all three levels of information presentation from the robot in random order. They completed the study separately at different time slots, so there was no contact between participants.
4.2 Results
Analysis of LOT preferences (Figure 4) revealed significant differences among users (M=1.62, SD=0.87, p<0.001). All of the participants preferred the robot to say what it was doing at the moment (LOT level 1). 38% (M=0.38, SD=0.5) of the participants wanted the robot to additionally present the reason for its actions (LOT level 2), while only 23% (M=0.23, SD=0.44) wanted information on future actions of the robot (LOT level 3).
Participants did not express discomfort or excess workload while interacting with the robot at higher LOTs. They also gave their preferences for specific feedback content from the robot (Figure 5). Several participants wanted the robot to say more than basic task-related information such as 'following' or 'stopping'. Some wished it would introduce itself and greet them. Most of the participants (85%) also desired that the robot communicate in their native language (Hebrew).
The results provided the rationale for using the first LOT (the robot's current action) with specific expressions such as 'starting', 'following' and 'stopping' in the next experimental stage. Greetings reflecting the suggested content (such as 'hello' or 'bye') were added to make the communication friendlier. The preference for native-language feedback was accommodated in subsequent studies by enabling users to choose the language of feedback (English or Hebrew).
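The resulting feedback content can be summarised as a small bilingual phrase table (the phrases and structure below are an illustrative sketch, not the study's actual implementation):

```python
# Illustrative bilingual phrase table derived from the study's results:
# basic status expressions plus greetings, in English or Hebrew.
FEEDBACK_PHRASES = {
    "en": {"greet": "hello", "follow": "following",
           "stop": "stopping", "bye": "bye"},
    "he": {"greet": "שלום", "follow": "עוקב",
           "stop": "עוצר", "bye": "להתראות"},
}

def feedback_phrase(event, language="en"):
    """Return the phrase the robot should say for a given event."""
    return FEEDBACK_PHRASES[language][event]
```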
5 Mode of feedback
The aim of this experiment was to identify the most suitable mode of feedback, considering that a person-following robot is expected to be positioned behind the user most of the time. This requires the feedback to be audible to the user, particularly when following. Two auditory feedback modes were explored: a female voice (as recommended in [10] and [11]) and a tone in the form of a sequence of beeps ('beep', 'beep'...). The voice content was 'following', 'stopping' and greetings. The voice was a recording of human speech in order to obtain a sound as close as possible to natural human communication. The beeping started once the robot began to follow and ceased when it stopped. The sound of the voice and tone feedback was maintained at approximately 60 dB, well above the background noise level. The volume was adjustable to the preference of the participant, such that it could be increased or decreased to make it comfortable and audible, in accordance with auditory feedback design guidelines [44].
The feedback modes were implemented according to design guidelines for general multimodal human-robot interaction [58]. The standards for developers addressing the needs of older persons [10, 11] were also consulted to satisfy design recommendations for the presentation of auditory information. Actual human speech was used instead of synthesized speech, based on earlier studies which revealed that it aided intelligibility [59]. A native speaker's recording was used to avoid accent-related difficulties in understanding the robot's communication [60]. The content of the feedback was based on the results obtained in the previous stage.
5.1 Experimental design
Independent Variables: The mode of feedback, manipulated as voice or tone.
Dependent Variables: subjective and objective measures as described in section 3.6.
Participants: Twelve additional older adult participants (9 females, 3 males) aged 62-73 were recruited. They were physically and cognitively fit for the experiments, as described in section 4.1. Each participant received feedback from the robot in both tone and voice modes.
5.2 Results
Analysis revealed that 10 of the participants (77%) preferred the voice feedback mode (M=0.77, SD=0.43) to the tone mode (M=0.08, SD=0.272), and 8% were fine with either mode (M=0.15, SD=0.368). The effect of feedback mode on preference was significant (M=0.92, SD=0.484, p<0.001). Feedback mode had no significant effect on comfort, engagement or persuasiveness. Eight of the 12 participants reported that they were comfortable in both trials; three were indifferent. This outcome is presented in Figure 6.
The heart rate measures were also not significantly affected by the feedback mode. A one-way ANOVA with mode of feedback as the fixed factor and user ID as a random effect revealed that the mode of feedback had a statistically significant effect on the users' understanding (M=2.0, SD=0.938, p<0.001). Voice feedback was therefore used in the subsequent experimental stages.
6 Timing of feedback
The temporal dimension of the feedback preference of the older adults was studied. The transparency level, content and mode of feedback were based on the outcome of the previous stages.
6.1 Experimental design
Independent Variables: The timing of feedback included three options: continuous (at 5- or 10-second intervals) and discrete. For example, in the continuous mode with a 5-second interval, the verbal feedback was given every 5 seconds (e.g., 'following', 'following', ...). In the discrete mode, the feedback was given only at the beginning and at the end of the interaction: the robot simply informed the participants when it began following and when it was stopping.
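The two timing modes can be sketched by computing the announcement times over a trial (an illustration of the modes as described, not the study's code):

```python
def feedback_times(trial_duration, interval=5.0, continuous=True):
    """Times (in seconds) at which the robot speaks during a trial.

    Continuous mode announces the current action every `interval`
    seconds; discrete mode announces only at the start and the end
    of the interaction.
    """
    if continuous:
        times, t = [], 0.0
        while t < trial_duration:
            times.append(t)
            t += interval
        return times
    return [0.0, trial_duration]
```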
Dependent Variables: the same variables described in section 3.6.
Participants: The same 12 participants recruited in section 5.1 continued in this experiment. Each participant received verbal feedback from the robot in both the discrete and continuous timing options. After the trials, they answered brief questions in questionnaire and interview format regarding which feedback timing they preferred and why.
6.2 Results
Analyses showed that 80% (10) of the participants preferred the continuous feedback (M=0.85, SD=0.366) over the discrete feedback (M=0.15, SD=0.366). The effect of the feedback timing on the users’ preference was statistically significant (M=1.46, SD=0.756, p<0.001). The effect of feedback timing as a fixed variable on understanding was also statistically significant (M=1.87, SD=0.923, p<0.001). Among those who selected continuous feedback as their preferred timing mode, 84.6% preferred an interval of 5 seconds (M=0.69, SD=0.468) over 10 seconds (M=0.15, SD=0.366). The reason given was better awareness of what the robot was doing behind them at every point in time. This provided the rationale for using continuous feedback at 5-second intervals in the subsequent quality of interaction evaluation experiment.
7 Does the feedback implementation improve the quality of interaction?
The feedback design parameters obtained in the three previous user studies were evaluated to examine their effects on the quality of interaction relative to a person-following robot with no feedback.
Hypotheses: the feedback design implementation will improve the quality of interaction, with the specific hypotheses as follows, assuming that the feedback:
H1: will improve the engagement of the participants.
H2: will increase the participants’ understanding of the robot.
H3: will improve the trust the user has in the robot.
H4: will improve the comfort of the participant.
7.1 Experimental design
Independent Variables: There were two groups: one group interacted with the robot without feedback, the other group interacted with the implemented feedback.
Dependent Variables: Quality of interaction was measured both objectively and subjectively in terms of engagement, understanding, trust and comfort [6].
Objective Measures: Engagement, understanding, trust and comfortability were measured as explained in section 3.6. A summary of the objective measures used is presented in Table 3.
Variable | Objective measures
---|---
Engagement | Gaze duration (seconds)
Understanding | Number of interruptions to ask for more clarification (counted); reaction time (seconds)
Trust | Overall time spent on the task of walking to pick up an item without looking back at the robot following behind (seconds); time spent waiting for the robot when it lost track or was delayed (seconds)
Comfortability | Number of times participants glanced back at the robot (counted)
Subjective Measures: Questionnaires and short interviews regarding their comfort level, understanding of the robot’s information, trust and satisfaction as explained in section 3.6.
Participants: 20 older adult participants (13 females, 7 males) aged 65–85. They were healthy participants with no major physical disability. Ten of the participants received feedback from the robot while the other 10 received no feedback. Additional analyses were conducted to explore the potential influence of the participants’ age and gender on the QoI variables assessed, as well as the influence of the participants’ predisposition on the QoI. The influence of age was assessed by a correlation analysis between the age of the participants (M=78.84, SD=6.72) and the different QoI variables. Similar analyses were conducted for the correlation between gender and QoI, and between participants’ responses to the NARS questionnaire and QoI.
Feedback Design: Feedback was designed using the preferred parameters identified in the preceding stages as detailed in Table 4.
Parameter | Preference | Description
---|---|---
Level of transparency | Level 1 LOT | Information on what the robot is currently doing.
Content of feedback | Action of the robot; friendly content | Specific information such as ‘starting’, ‘following’, ‘stopping’; greetings from the robot.
Mode of feedback | Voice feedback | Audible female voice with a speech rate below 140 wpm and adequate pauses at grammatical boundaries.
Timing of feedback | Continuous feedback (5-second interval) | Notification of the state of the robot every 5 seconds (e.g., ‘following’, ‘following’ ...).
7.2 Results
7.2.1 Attitude towards technology
Most of the participants were acquainted with the use of innovative technologies (M=3.39, SD=0.72). The TAP index [21] revealed that more than half of the participants were affirmative that technology could provide more control and flexibility in life (M=2.48, SD=1.59). Several of them also showed confidence in learning new technologies (M=2.95, SD=1.18), and trusted technology (M=3.04, SD=1.58). The NARS index [55] revealed that about 60% of the participants felt that if they depended too much on the robot, something bad might happen (M=3.05, SD=1.19).
7.2.2 Engagement
The results revealed an increase in the time (overall M=3.15, SD=4.38) participants focused on the robot while it presented information about the interaction before following (F(1,37)=20, p<0.001). In the group where the feedback was implemented, participants were willing to spend more time communicating with the robot (M=5.65, SD=4.65) compared to the group without feedback (M=0.53, SD=1.88). There was a 60% improvement in communication frequency, suggesting improved engagement.
Responses from the questionnaire also showed significant differences in the response of the participants related to engagement (M=2.38, SD=0.74, p<0.001). Participants in the group with feedback (M=2.76, SD=0.54) made more positive comments regarding the naturalness of the robot that made them feel more connected to the robot compared to those in the group without feedback (M=1.96, SD=0.72). Several of the participants in the group with feedback expressed excitement at the robot’s communicative ability. Some of the comments made were: ’I was thrilled to hear the robot communicate with me in Hebrew. It helped me relate better with it’, ’the way it spoke every time, telling me what it’s doing made it interesting to interact with’. These comments suggest some form of engagement with the robot.
7.2.3 Understanding
The understanding of the participants improved with the feedback design, as reflected in how often (overall M=2.84, SD=3.81) participants impeded the flow of the interaction to ask for clarification regarding the actions of the robot (F(1,37)=3.7, p=0.062). Participants in the group with feedback experienced a smoother interaction flow with minimal interruptions (M=0.75, SD=1.12). Participants in the group without feedback interrupted the flow of the interaction more frequently when they were not certain of what the robot was doing (M=5.05, SD=4.39).
In terms of reaction time (overall M=2.2, SD=1.84), participants in the group with feedback (M=2.9, SD=1.94) spent more time (seconds) listening to the instructions from the robot before taking action (F(1,37)=6.76, p=0.013) compared to the group without the feedback design (M=1.47, SD=1.42). Additionally, the participants’ responses in the questionnaires regarding understanding (M=2.32, SD=0.66, p<0.001) showed that the group with feedback (M=2.76, SD=0.54) had a better understanding of the robot than the group without feedback (M=1.83, SD=0.37).
7.2.4 Trust
Results revealed that participants focused on the task without worrying about the robot coming from behind when the feedback was implemented (M=76.95, SD=20.08), as seen in the time they spent on the task of picking up an item (overall M=99.33, SD=32.95). This was statistically significant (F(1,37)=36.78, p<0.001) compared to the time spent in the group without feedback (M=122.89, SD=26.9), suggesting they gained some level of trust that the robot would not collide with them or cause them harm. Participants in the group with feedback (M=0.75, SD=1.12) also waited for the robot less (overall M=2.85, SD=3.81) than participants in the group without feedback (M=5.05, SD=4.4); this difference was also statistically significant (F(1,37)=18, p<0.001) and could be due to better awareness of the robot’s state when it was delayed or lost track.
7.2.5 Comfortability
Regarding comfortability as measured by the number of back glances (overall M=2.9, SD=3.6) the participants made, there was no significant difference (F(1,37)=0.073, p=0.88) between the participants in the group with feedback (M=3, SD=4.05) and those in the group without feedback (M=2.68, SD=3.15). A significant difference was, however, found in the comfortability of communicating with the robot (M=2.37, SD=0.7, p=0.004) based on the questionnaire responses. Participants in the group with feedback (M=2.67, SD=0.66) responded more positively regarding comfortability with the robot than those in the group without feedback (M=2.05, SD=0.66).
The influence of the feedback on the QoI variables as measured in the objective variables is presented in Table 5.
 | Engage | Understand | Trust | Comfort
---|---|---|---|---
NFD Mean | 0.53 | 1.47 | 122.89 | 2.68
NFD SD | 1.88 | 1.42 | 26.9 | 3.15
WFD Mean | 5.65 | 2.9 | 76.95 | 3.00
WFD SD | 4.65 | 1.94 | 20.08 | 4.05
Sig. | <0.01** | 0.01* | <0.01** | 0.88
*p<0.05, **p<0.01, NFD = No feedback,
WFD = With feedback, Engage = Engagement,
Understand = Understanding, Comfort = Comfortability
7.2.6 Influence of initial attitude of participants
Correlation analyses were conducted to explore possible relationships between the predisposition of the participants, in the form of the NARS index, and the objective variables. These were conducted using Pearson’s correlation coefficient to determine the trend, significance and effect size. A significant positive correlation was observed between the NARS index of the participants and engagement (r=0.51, n=20, p=0.021). Participants with a more negative attitude towards the robot gazed at it more intently.
There was also significant positive correlation between the NARS index of participants and the level of understanding in terms of number of interruptions made to ask for clarification (r=0.491, n=20, p=0.028) and reaction time (r=0.448, n=20, p=0.047). Participants who were more negatively disposed to the robot seemed to ask more questions about the robot and also had a longer reaction time to the robot’s requests.
There was a negative correlation between the NARS index of participants and the trust the participants had in the robots. The more negative predisposition the participants had regarding the robot, the less they trusted the robot as observed in the duration of time they spent on the task with the robot (r=-0.558, n=20, p=0.01) and the duration when they waited for the robot (r=-0.362, n=20, p=0.116).
The relationship between the NARS index of participants and their comfortability was positive but was not significant (r=0.071, n=20, p=0.763).
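The correlation analyses above can be reproduced with a plain implementation of Pearson's r. The study presumably used a statistics package, so the sketch below is illustrative only:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples.

    r near +1: variables rise together (e.g., NARS index and gaze duration);
    r near -1: one falls as the other rises (e.g., NARS index and trust);
    r near 0: no linear relationship.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Significance would then follow from a t test on r with n-2 degrees of freedom, which is what a library routine such as `scipy.stats.pearsonr` reports alongside r.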
7.2.7 Influence of age group and gender
There was a statistically significant positive correlation between age and engagement as measured by gaze duration (M=3.84, SD=4.34, r=0.56, n=20, p=0.004). This is depicted in Figure 7. As age increased, the participants tended to gaze more at the robot during communication.
There was also a trend between the age and the understanding of the participants as assessed in terms of the number of clarifications made by the participants (M=0.4, SD=0.5) and the reaction time in seconds (M=2.27, SD=1.86). The correlation between the age of the participants and the number of clarifications made by the participants was not significant (r=0.097, n=20, p=0.568) but there was a fairly significant positive correlation between age and reaction time, (r=0.32, n=20, p=0.056). As the age increased, slower response to the robot’s instructions were observed. This is presented in Figure 7.
The trend between age and trust, which was measured in terms of the duration the participants waited for the robot when it lost track (M=2.4, SD=3.79), was also explored (Figure 8). A significant negative correlation was observed between age and the waiting duration (r=-0.443, n=20, p=0.027). There was no significant correlation between the age of the participants and their level of comfortability, as assessed by the number of back glances made at the robot while walking (M=2.9, SD=3.6, r=-0.287, n=20, p=0.164) and the total time they spent with the robot (r=-0.231, n=20, p=0.267), but the analysis reveals some negative trend with age (Figure 8).
With regard to gender, the females (M=7.6, SE=1.36) seemed more engaged (F(1,13)=4.5, p=0.054), as seen in the gaze duration (overall M=5.51, SE=0.86), compared to the males (M=4, SE=1.01). The males (M=2.05, SE=0.72) also seemed to trust the robot less than the females (M=3.94, SE=1.85), as observed in the duration of time spent (overall M=2.84, SE=0.837) with the robot (F(1,12)=0.898, p=0.362). In terms of understanding, as assessed through the number of clarifications made (overall M=0.61, SE=0.13), the females asked more questions than the males, but this was not statistically significant (F(1,23)=0.123, p=0.729). The difference in the level of comfortability each gender experienced, as assessed by the number of times they glanced back at the robot (M=2.23, SE=0.675) and the amount of time they spent with the robot (M=93.24, SE=9.11), was also not statistically significant (F(1,23)=0.007, p=0.934).
8 Design guidelines
This is a first attempt to explore feedback parameters via a series of user studies focusing on a person-following robot application for older adults. This sequential user-centered study aimed to ascertain the older adults’ user needs and preferences regarding feedback from the robot for this defined task of person-following. The implications of this series of sequential studies as relating to improved feedback design are presented in the following subsections.
8.1 Transparency considerations
Users prefer information on what the robot is doing (LOT 1). Hence, they do not need the robotic system to be fully transparent; rather, they want its feedback to be current and immediate. They are satisfied with the robot communicating just its current actions and status information. Older adults seem to trust that the robot will know how to handle itself if more information is available or if the state of affairs changes, despite their initial disposition towards the robot as revealed in the NARS index. This agrees with the discussions in [27], where it was hypothesized that users may prefer less information based on the degree of trust they have developed in the system. Users’ preference in our first study also concurs with the design principles for transparency outlined in [20], where designers were cautioned against providing too much information to users: if such information exceeds the preferences and needs of the users, it may bring frustration and/or confusion. It also agrees with findings in [36], which noted that providing too much information results in information overload and a decrease in users’ performance.
In order not to limit the participants to receive feedback only for task-related transparency options, participants were asked to suggest additional information that they would want the robot to give. This was to make room for other aspects of transparency relating to the robot such as information on how the robot makes its decisions or principles guiding its actions. The discussion was also intended to address environment-related feedback content such as structure of the environment, constraints in the environment and safety-related information about the environment. Caution was taken to avoid overloading the participants with too many transparency options. Therefore, transparency models connected with teamwork (information on the role of the robot and human in task), human state (information regarding physical, emotional or stress state of the participant) were not mentioned. Participants were asked to point out specific content they would like the robot to give. Participants’ responses (Figure 5) indicated that they were interested in task-related transparency information (such as ‘following’, ‘stopping’). In addition, some participants wanted the robot to ask about their wellbeing (greetings, e.g., ‘how are you?’). These are aspects of the human-related model of transparency which participants provided without being directly asked. It supports the significance of ‘thinking aloud’ sessions recommended in user-centered system designs [28]. The preference for greetings also supports the finding of Sabelli et al. [61].
The outcome of the first stage also brings to the fore an interesting contrast in the LOT demands of younger and older adults. In a previous study [32], earlier discussed, where younger adults (aged 18-22) participated in a user study to examine the effect of transparency and feedback modality on trust, they preferred higher LOTs. This may not only be an age-related trust issue but may also be connected with the embodiment of the robot. The robot in [32] was simulated on a computer desktop and not physically present as used in the current study. This suggests that interacting with a physical robot and observing its performance may have a stronger effect on the users’ trust and affect the amount of information (LOT) such user may prefer the robot to provide compared to a simulated robot.
The population in this study may be unique in its LOT demands, but we cannot confirm this conclusion since the studies involved a single, specific task. To establish a stronger mapping between the preferences in this study and those of the wider population of older adults, more extensive studies are recommended, as suggested in [62]. Such studies would assess the external validity of this outcome on a larger scale, and could also examine possible changes in users’ transparency demands, such as adaptation of trust and comfortability, as interaction with a robot occurs over longer periods.
8.2 Feedback modality considerations
Users prefer the robot to communicate with them in voice mode. The voice, as compared to tone mode, tends to give the robot a form of personality which enables users to better envision it as an assistant or partner rather than a mere machine. This tallied with the findings in [63], where it was highlighted that verbal mode improves perceptions of friendship and social presence. Even though the outcome in [32] suggested that feedback modality was not significant, the task was different, which emphasizes the importance of evaluating feedback design parameters in specific tasks to ensure applicability, as recommended in [32] and in [27]. This also agrees with the recommendations in [20] regarding designing communicative interfaces so that the feedback modality fits the needs and preferences of the user in defined tasks. In the current task, where the robot follows the user, the voice feedback modality tends to keep the users more engaged with the robot, which is one of the variables indicating a potential improvement in the quality of interaction.
Identifying a primary means of feedback was crucial in this study, particularly in connection with the preferred content of the feedback. Providing multiple modalities can be explored as a next stage, with the possibility of including haptic feedback. However, considerations must be made regarding the cognitive peculiarities of older adult users, which influence the number of sources of information they can process at a time [10], [58]. It is pertinent that the older adults are not overloaded with information. There is the potential of adaptable modality selection [36, 39, 64], which may provide the option of user-defined modality preferences based on the complexity of the task, human physical or cognitive state, performance and environmental factors. This would allow the older adult to further personalize their feedback modality preferences, which aligns with the goal of meeting the needs, preferences, capabilities and limitations of users [39, 44, 65].
8.3 Feedback timing considerations
Continuous feedback, at short intervals, was preferred by the participants. It seemed to provide them with better awareness regarding the state of the interaction compared to the discrete feedback used in previous studies. This conforms with previous studies where continuous feedback timing was found to improve users’ awareness [36, 38, 66], even though those studies were not focused on older adult users. The outcome of this stage therefore highlights a crucial feedback design component: providing minimal information (LOT 1) continuously at short intervals. We would, however, recommend that this preference be treated with caution, as users’ preferences could change with the complexity of the task or the duration of interaction. Over the long term, participants may adapt their level of trust in the robot and come to rely on it more, such that longer intervals between continuous feedback messages may be preferred. Different degrees of involvement of the robot in the task may also influence the frequency of information required by the participants [20, 26]. Users may require information less frequently from a robot that is more autonomous than from one which depends on the user for each action. This concept of the influence of the robot’s level of autonomy on feedback timing in the context of a person-following task for older adults requires further exploration.
8.4 Predisposition considerations
The results of the correlation analyses of the initial attitude of participants, as indicated by the NARS responses, revealed the impact that predisposition towards robots could have on the interaction. This reflects previous findings [55], where it was explained that the initial attitude of participants affected the manner in which they evaluated the robot, which then influenced the interaction. Bishop et al. [67] highlighted the negative influence that subjective negative affect could potentially have on interaction with a robot. This could be responsible for the trends seen, where participants who had a more negative initial attitude towards the robot even before interaction (with or without the feedback) seemed to understand it less and also trust it less.
It is therefore important to include some form of introductory session by the robot to better prepare the older adult for the interaction. The feedback design parameters and interfaces should make allowance for such user-friendly initial introductions before the actual task implementation with the robot. The older adults should, however, also be given the option to skip this session if they are already familiar with the robot, so that the introduction does not induce boredom in the interaction. Such a session can also help overcome the novelty effect [68] and provide basic training so as to ensure focus is on the specific study parameters.
8.5 Gender and age considerations
Results revealed that age and gender could influence the perception, preferences and attitudes of the participants towards the robot. Even though some intra-individual differences may also be responsible for some of the observations made [44], the results in this study revealed that the inter-individual differences stemming from age and gender are worth considering in the design. Thus, in the process of developing a user-centered feedback design, the preferences of women should be considered separately from those of men, and the feedback parameters should be tuned to suit the preferences of older adult users in different age categories.
For example, regarding engagement, the trend reveals that the older old adults tend to be more engaged with the robot compared to their younger counterparts as seen in the gaze duration analyses. This could be connected with novelty effect where a larger percentage of the younger old adults may have been more familiar with some form of related technologies compared to the older old adults [7]. This could inspire more attraction to the robot, and thus engage them more. This agrees with previous findings [67] where it was shown that familiarity with related technology negatively correlates with the attitude and intention to interact with the robot. Those who were more familiar with related technology may find the interaction less enjoyable and thus may not be as engaged as those who are less familiar [67]. Attention has to be placed on measures to improve the engagement in the younger older adult category.
It was also understandable that the older old adults had a longer reaction time when interacting with the robot, as seen in the correlation of age with understanding. Many older old adults do not have as much experience with technology as the younger old adults, as observed in Heerink’s study [69], which additionally established that experience with related technology aids the use of a system. This could explain why the younger old adults, who likely had more experience with technology, seemed to have a better understanding of the robot’s operation than the older old adults. It also emphasizes the importance of adapting some of the feedback parameters, such as clarity, repetition of instruction and rate of feedback, to aid the understanding of the older old adults.
The older old adults seemed to trust the robot more than the younger old adults as seen in their waiting time. This also agrees with the discussions in [67] where it was stated that younger users who may be more familiar with related technology were more aware of the robot’s limitations and therefore may have felt less safe around the robot. This could affect the trust the more familiar people felt around the robot. Even though Broadbent et al. [70] mentioned that some older people may show more negative emotions towards robots, this study reveals that familiarity may not necessarily improve the trust index. However, if the robot constantly informs the user on its capabilities, this could have some influence on the users’ willingness to trust the capabilities of the robot as observed in [69]. It also brings an important consideration regarding feedback design to the fore: informing users of the capabilities of the robot and demonstrating such capabilities. This form of communicative attribute coupled with reliable performance of the robot as emphasized by Hancock et al. [71] in their study of factors influencing trust in HRI, can potentially help the users trust the robot better. It may also improve the comfortability trend at all age categories as established in the Almere model [72] showing perceptual influences on acceptance of a robot by older adults.
Gender had also been found in previous studies to have a significant influence on the interaction with the robot [67, 69, 73]. This was confirmed in the current study, where the females were more engaged with the robot than the males and also seemed to ask more questions to clarify their understanding of the robot. The females also seemed to trust the robot more, as seen in the time they waited for the robot: they seemed to trust that the robot would perform correctly even when it delayed or lost track. Even though Heerink [72] and de Graff [73] associated anxiety with females’ interaction with a robot, the current study agrees with earlier findings by Shibata et al. [74], where it was stated that females are more comfortable around robots. This could potentially influence trust positively. Several reasons could be responsible for this disparity, including the context and type of robot; the cause cannot be fully established from this study due to the limited sample. But it highlights the need to further explore the expectations and needs of the different genders, so that the feedback design could be tuned to meet possible gender preferences.
Gender and age category of the older adult users should therefore be adequately considered in order to meet the specific needs in the different groups that make up the older adults’ population.
8.6 Design implications, limitations and future work
While evaluating the effectiveness of the feedback design, we observed that the users were more engaged with the robot, understood the robot more, and better trusted the robot when it communicated with them using the implemented feedback. This confirms hypotheses H1 – H3. Even though analyses of the objective measures did not confirm hypothesis H4, the responses of the participants in the questionnaires showed that they were more comfortable communicating with the robot when the feedback was implemented. The feedback was designed to match the perceptual demands of the target users. The outcome supports the proposition in literature that such user-centered feedback design can increase the quality of interaction.
One of the limitations of this study is that the feedback design was evaluated for a single task scenario. The feedback was not evaluated in multiple task situations with varying environmental variables such as noise and space type. Evaluation of the feedback design parameters is also recommended for an extended period of time in order to assess the preferences of the older adults as the novelty effect wears off. It is also recommended that training be conducted for older adult users as naïve users, regarding how the feedback interfaces operate in order to maximize the interaction quality. These are crucial factors that should be considered in future work to improve the robustness of the feedback design. The outcome of this study provides some guidelines and recommendations that could be useful while conducting more extensive studies on feedback design in person-following robots that will accommodate user needs in eldercare. Ongoing research is aimed to advance these studies to other tasks, robot types and populations.
Acknowledgment
The authors wish to thank Shanee H. Honig who contributed to the development of the experimental testbed in previous studies.
Funding: This research was supported by the EU funded Innovative Training Network (ITN) in the Marie Skłodowska-Curie People Programme (Horizon2020): SOCRATES (Social Cognitive Robotics in a European Society training research network), grant agreement number 721619, and by the Ministry of Science Fund, grant agreement number 47897. Partial support was provided by Ben-Gurion University of the Negev through the Helmsley Charitable Trust, the Agricultural, Biological and Cognitive Robotics Initiative, the Marcus Endowment Fund, the Center for Digital Innovation research fund, the Rabbi W. Gunther Plaut Chair in Manufacturing Engineering and the George Shrut Chair in Human Performance Management.
References
[1] D. Tang, B. Yusuf, J. Botzheim, N. Kubota, and C. S. Chan, “A novel multimodal communication framework using robot partner for aging population,” Expert Syst. Appl., vol. 42, no. 9, pp. 4540–4555, 2015, doi: 10.1016/j.eswa.2015.01.016.
[2] S. S. Honig, T. Oron-Gilad, H. Zaichyk, V. Sarne-Fleischmann, S. Olatunji, and Y. Edan, “Toward socially aware person-following robots,” IEEE Trans. Cogn. Dev. Syst., vol. 10, no. 4, pp. 936–954, 2018, doi: 10.1109/TCDS.2018.2825641.
[3] H. Sidenbladh, D. Kragic, and H. I. Christensen, “A person following behaviour for a mobile robot,” in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), IEEE, vol. 1, pp. 670–675, 1999.
[4] S. S. Honig, D. Katz, T. Oron-Gilad, and Y. Edan, “The influence of following angle on performance metrics of a human-following robot,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (New York, USA), IEEE, pp. 593–598, 2016.
[5] S. Olatunji, V. Sarne-Fleischmann, S. S. Honig, T. Oron-Gilad, and Y. Edan, “Feedback design to improve interaction of person-following robots for older adults,” 2018. Available: https://pdfs.semanticscholar.org/2e1e/e201a1de05885e102b0ffidc5dabefefd5531.pdf.
[6] S. Bensch, A. Jevtić, and T. Hellström, “On interaction quality in human-robot interaction,” in Proc. 9th Int. Conf. Agents Artif. Intell. ICAART (Porto, Portugal), vol. 1, pp. 182–189, 2017, doi: 10.5220/0006191601820189.
[7] O. Zafrani and G. Nimrod, “Towards a holistic approach to studying human-robot interaction in later life,” Gerontologist, vol. 59, no. 1, pp. e26–e36, January 2019, doi: 10.1093/geront/gny077.
[8] C. Owsley and G. McGwin Jr., “Association between visual attention and mobility in older adults,” J. Am. Geriatr. Soc., vol. 52, no. 11, pp. 1901–1906, November 2004, doi: 10.1111/j.1532-5415.2004.52516.x.
[9] I. Leite, C. Martinho, and A. Paiva, “Social robots for long-term interaction: a survey,” Int. J. Soc. Robot., vol. 5, no. 2, pp. 291–308, 2013, doi: 10.1007/s12369-013-0178-y.
[10] European Committee for Standardization (CEN) and European Committee for Electrotechnical Standardization (CENELEC), “Guidelines for standards developers to address the needs of older persons and persons with disabilities,” Eur. Comm. Stand. Eur. Comm. Electrotech. Stand., vol. 6, no. 2, pp. 1–31, 2002.
[11] T. L. Mitzner, C.-A. Smarr, W. A. Rogers, and A. D. Fisk, “Adult’s perceptual abilities,” in The Cambridge Handbook of Applied Perception Research, pp. 1051–1079, 2015, doi: 10.1017/CBO9780511973017.061.
[12] C. E. Shannon, “Communication theory of secrecy systems,” Bell Syst. Tech. J., vol. 28, no. 4, pp. 656–715, 1949, doi: 10.1002/j.1538-7305.1949.tb00928.x.
[13] D. Doran, S. Schulz, and T. R. Besold, “What does explainable AI really mean? A new conceptualization of perspectives,” arXiv:1710.00794, 2017.
[14] N. Balfe, S. Sharples, and J. R. Wilson, “Understanding is key: An analysis of factors pertaining to trust in a real-world automation system,” Hum. Factors, vol. 60, no. 4, pp. 477–495, June 2018, doi: 10.1177/0018720818761256.
[15] T. Hellström and S. Bensch, “Understandable robots – what, why, and how,” Paladyn, J. Behav. Robot., vol. 9, no. 1, pp. 110–123, 2018, doi: 10.1515/pjbr-2018-0009.
[16] S. Olatunji, T. Oron-Gilad, and Y. Edan, “Increasing the understanding between a dining table robot assistant and the user,” in Proceedings of the International PhD Conference on Safe and Social Robotics (SSR-2018) (Madrid, Spain), EU Horizon2020 projects – SOCRATES and SECURE, 2018.
[17] N. Mirnig and M. Tscheligi, “Comprehension, coherence and consistency: Essentials of robot feedback,” in Robots that Talk and Listen. Technology and Social Impact, J. A. Markowitz (ed.), De Gruyter, 2014, pp. 149–171.
[18] J. Y. C. Chen, S. G. Lakhmani, K. Stowers, A. R. Selkowitz, J. L. Wright, and M. Barnes, “Situation awareness-based agent transparency and human autonomy teaming effectiveness,” Theor. Issues Ergon. Sci., vol. 19, no. 3, pp. 259–282, 2018, doi: 10.1080/1463922X.2017.1315750.
[19] N. Mirnig, A. Weiss, and M. Tscheligi, “A communication structure for human-robot itinerary requests,” in Proc. 6th Int. Conf. Human Robot Interaction (Lausanne, Switzerland), ACM, 2011, pp. 205–206, doi: 10.1145/1957656.1957733.
[20] J. B. Lyons, “Being transparent about transparency: A model for human-robot interaction,” in 2013 AAAI Spring Symposium Series, Stanford University, USA, 2013.
[21] J. Y. Chen, K. Procci, M. Boyce, J. Wright, A. Garcia, and M. Barnes, “Situation awareness-based agent transparency,” Technical report, Army Research Lab, Aberdeen Proving Ground, MD, Human Research and Engineering…, 2014, doi: 10.21236/ADA600351.
[22] M. R. Endsley, “Toward a theory of situation awareness in dynamic systems,” Hum. Factors, vol. 37, no. 1, pp. 32–64, March 1995, doi: 10.1518/001872095779049543.
[23] M. Cristani, G. Paggetti, A. Vinciarelli, L. Bazzani, G. Menegaz, and V. Murino, “Towards computational proxemics: Inferring social relations from interpersonal distances,” in 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing (Boston, MA), IEEE, 2011, pp. 290–297, doi: 10.1109/PASSAT/SocialCom.2011.32.
[24] A. Theodorou, R. H. Wortham, and J. J. Bryson, “Designing and implementing transparency for real time inspection of autonomous robots,” Connect. Sci., vol. 29, no. 3, pp. 230–241, 2017, doi: 10.1080/09540091.2017.1310182.
[25] T. Inagaki, “Smart collaboration between humans and machines based on mutual understanding,” Annu. Rev. Contr., vol. 32, no. 2, pp. 253–261, 2008, doi: 10.1016/j.arcontrol.2008.07.003.
[26] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” IEEE Trans. Syst. Man Cybern. A Syst. Hum., vol. 30, no. 3, pp. 286–297, May 2000, doi: 10.1109/3468.844354.
[27] R. H. Wortham and A. Theodorou, “Robot transparency, trust and utility,” Connect. Sci., vol. 29, no. 3, pp. 242–248, 2017, doi: 10.1080/09540091.2017.1313816.
[28] T. Fong, N. Cabrol, C. Thorpe, and C. Baur, “A personal user interface for collaborative human-robot exploration,” in 6th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS) (Montreal, Canada), 2001.
[29] A. Eliav, T. Lavie, Y. Parmet, H. Stern, and Y. Edan, “Advanced methods for displays and remote control of robots,” Appl. Ergon., vol. 42, no. 6, pp. 820–829, November 2011, doi: 10.1016/j.apergo.2011.01.004.
[30] N. Markfeld, “Feedback design for older adults in robot assisted table setting task,” Master’s thesis, Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, 2019.
[31] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, “Effects of nonverbal communication on efficiency and robustness in human-robot teamwork,” in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (Alberta, Canada), IEEE, 2005, pp. 708–713, doi: 10.1109/IROS.2005.1545011.
[32] T. L. Sanders, T. Wixon, K. E. Schafer, J. Y. C. Chen, and P. A. Hancock, “The influence of modality and transparency on trust in human-robot interaction,” in 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) (Las Vegas, NV, USA), IEEE, 2014, pp. 156–159, doi: 10.1109/CogSIMA.2014.6816556.
[33] V. Finomore, K. Satterfield, A. Sitz, C. Castle, G. Funke, T. Shaw, and M. Funke, “Effects of the multi-modal communication tool on communication and change detection for command and control operators,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Boston, MA, USA), SAGE Publications, Los Angeles, CA, 2012, vol. 56, pp. 1461–1465, doi: 10.1177/1071181312561410.
[34] T. Kim and P. Hinds, “Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction,” in RO-MAN 2006 – The 15th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, 2006, pp. 80–85, doi: 10.1109/ROMAN.2006.314398.
[35] E. A. Cring and A. G. Lenfestey, “Architecting human operator trust in automation to improve system effectiveness in multiple unmanned aerial vehicles (UAV) control,” Technical report, Air Force Inst. of Tech., Wright-Patterson AFB, OH, Graduate School of…, 2009.
[36] G. Doisy, J. Meyer, and Y. Edan, “The impact of human–robot interface design on the use of a learning robot system,” IEEE Trans. Hum. Mach. Syst., vol. 44, no. 6, pp. 788–795, 2014, doi: 10.1109/THMS.2014.2331618.
[37] K. Fischer, K. Lohan, J. Saunders, C. Nehaniv, B. Wrede, and K. Rohlfing, “The impact of the contingency of robot feedback on HRI,” in 2013 International Conference on Collaboration Technologies and Systems (CTS) (San Diego, CA, USA), IEEE, 2013, pp. 210–217, doi: 10.1109/CTS.2013.6567231.
[38] S. Agrawal and H. Yanco, “Feedback methods in HRI: Studying their effect on real-time trust and operator workload,” in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA), ACM, 2018, pp. 49–50, doi: 10.1145/3173386.3177031.
[39] J. Hoecherl, M. Schmargendorf, B. Wrede, and T. Schlegl, “User-centered design of multimodal robot feedback for cobots of human-robot working cells in industrial production contexts,” in ISR 2018; 50th International Symposium on Robotics (Munich, Germany), VDE, 2018, pp. 1–8.
[40] M. R. Endsley and D. B. Kaber, “Level of automation effects on performance, situation awareness and workload in a dynamic control task,” Ergonomics, vol. 42, no. 3, pp. 462–492, March 1999, doi: 10.1080/001401399185595.
[41] H. Zender, P. Jensfelt, and G.-J. M. Kruijff, “Human- and situation-aware people following,” in RO-MAN 2007 – The 16th IEEE International Symposium on Robot and Human Interactive Communication (Jeju Island, Korea), IEEE, 2007, pp. 1131–1136, doi: 10.1109/ROMAN.2007.4415250.
[42] R. Gockley, J. Forlizzi, and R. Simmons, “Natural person-following behavior for social robots,” in Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (Arlington, Virginia, USA), ACM, 2007, pp. 17–24, doi: 10.1145/1228716.1228720.
[43] H.-M. Gross, A. Scheidig, K. Debes, E. Einhorn, M. Eisenbach, S. Mueller, et al., “ROREAS: Robot coach for walking and orientation training in clinical post-stroke rehabilitation – prototype implementation and evaluation in field trials,” Auton. Robots, vol. 41, no. 3, pp. 679–698, 2017, doi: 10.1007/s10514-016-9552-6.
[44] S. J. Czaja, W. R. Boot, N. Charness, and W. A. Rogers, Designing for Older Adults: Principles and Creative Human Factors Approaches, CRC Press, 2019, doi: 10.1201/b22189.
[45] M. Quigley, B. Gerkey, and W. D. Smart, Programming Robots with ROS: A Practical Introduction to the Robot Operating System, O’Reilly Media, Inc., 2015.
[46] M. Munaro, A. Horn, R. Illum, J. Burke, and R. B. Rusu, “OpenPTrack: people tracking for heterogeneous networks of color-depth cameras,” in IAS-13 Workshop Proceedings: 1st Intl. Workshop on 3D Robot Perception with Point Cloud Library (Padova, Italy), 2014, pp. 235–247.
[47] M. Munaro and E. Menegatti, “Fast RGB-D people tracking for service robots,” Auton. Robots, vol. 37, no. 3, pp. 227–242, 2014, doi: 10.1007/s10514-014-9385-0.
[48] G. Doisy, A. Jevtic, E. Lucet, and Y. Edan, “Adaptive person-following algorithm based on depth images and mapping,” in Proc. of the IROS Workshop on Robot Motion Planning (Vilamoura, Algarve, Portugal), 2012, vol. 20, no. 12.
[49] C. Piezzo and K. Suzuki, “Feasibility study of a socially assistive humanoid robot for guiding elderly individuals during walking,” Future Internet, vol. 9, no. 3, art. 30, 2017, doi: 10.3390/fi9030030.
[50] J. Miura, J. Satake, M. Chiba, Y. Ishikawa, K. Kitajima, and H. Masuzawa, “Development of a person following robot and its experimental evaluation,” in Proceedings of the 11th International Conference on Intelligent Autonomous Systems (Ottawa, Canada), 2010, pp. 89–98.
[51] C. Bassani, A. Scalmato, F. Mastrogiovanni, and A. Sgorbissa, “Towards an integrated and human friendly path following and obstacle avoidance behaviour for robots,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (New York, USA), IEEE, 2016, pp. 599–605, doi: 10.1109/ROMAN.2016.7745179.
[52] D. Katz, “Development of algorithms for a human following robot equipped with kinect vision and laser sensors in an unknown indoor environment with obstacles and corners,” Master’s thesis, Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, 2016.
[53] M. Ratchford and M. Barnhart, “Development and validation of the technology adoption propensity (TAP) index,” J. Bus. Res., vol. 65, no. 8, pp. 1209–1215, 2012, doi: 10.1016/j.jbusres.2011.07.001.
[54] D. S. Syrdal, K. Dautenhahn, K. L. Koay, and M. L. Walters, “The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study,” Adaptive and Emergent Behaviour and Complex Systems, 2009.
[55] R. M. Taylor, “Situational awareness rating technique (SART): The development of a tool for aircrew systems design,” in Situational Awareness, Routledge, 2017, pp. 111–128, doi: 10.4324/9781315087924-8.
[56] P. Rani, N. Sarkar, C. A. Smith, and L. D. Kirby, “Anxiety detecting robotic system – towards implicit human-robot collaboration,” Robotica, vol. 22, no. 1, pp. 85–95, 2004, doi: 10.1017/S0263574703005319.
[57] P. Rani, J. Sims, R. Brackin, and N. Sarkar, “Online stress detection using psychophysiological signals for implicit human-robot cooperation,” Robotica, vol. 20, no. 6, pp. 673–685, 2002, doi: 10.1017/S0263574702004484.
[58] N. Mirnig, B. Gonsior, S. Sosnowski, C. Landsiedel, D. Wollherr, A. Weiss, et al., “Feedback guidelines for multimodal human-robot interaction: How should a robot give feedback when asking for directions?” in 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (Paris, France), IEEE, 2012, pp. 533–538, doi: 10.1109/ROMAN.2012.6343806.
[59] M. K. Pichora-Fuller, C. E. Johnson, and K. E. J. Roodenburg, “The discrepancy between hearing impairment and handicap in the elderly: Balancing transaction and interaction in conversation,” J. Appl. Commun. Res., vol. 26, no. 1, pp. 99–119, 1998, doi: 10.1080/00909889809365494.
[60] A. N. Burda, J. A. Scherz, C. F. Hageman, and H. T. Edwards, “Age and understanding speakers with Spanish or Taiwanese accents,” Percept. Mot. Skills, vol. 97, no. 1, pp. 11–20, August 2003, doi: 10.2466/pms.2003.97.1.11.
[61] A. M. Sabelli, T. Kanda, and N. Hagita, “A conversational robot in an elderly care center: an ethnographic study,” in 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Lausanne, Switzerland), IEEE, 2011, pp. 37–44, doi: 10.1145/1957656.1957669.
[62] D. Porfirio, A. Sauppé, A. Albarghouthi, and B. Mutlu, “Computational tools for human-robot interaction design,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Daegu, Korea), IEEE, 2019, pp. 733–735, doi: 10.1109/HRI.2019.8673221.
[63] E. C. Grigore, A. Pereira, I. Zhou, D. Wang, and B. Scassellati, “Talk to me: Verbal communication improves perceptions of friendship and social presence in human-robot interaction,” in International Conference on Intelligent Virtual Agents (Los Angeles, USA), Springer, 2016, pp. 51–63, doi: 10.1007/978-3-319-47665-0_5.
[64] A. Taranović, A. Jevtić, and C. Torras, “Adaptable multimodal interaction framework for robot assisted cognitive training,” in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Madrid, Spain), ACM, 2018, pp. 327–328, doi: 10.1145/3173386.3176911.
[65] N. Doering, S. Poeschl, H.-M. Gross, A. Bley, C. Martin, and H.-J. Boehme, “User-centered design and evaluation of a mobile shopping robot,” Int. J. Soc. Robot., vol. 7, no. 2, pp. 203–225, 2015, doi: 10.1007/s12369-014-0257-8.
[66] C.-S. Lu and C.-S. Yang, “Safety leadership and safety behavior in container terminal operations,” Saf. Sci., vol. 48, no. 2, pp. 123–134, 2010, doi: 10.1016/j.ssci.2009.05.003.
[67] L. Bishop, A. Van Maris, S. Dogramadzi, and N. Zook, “Social robots: The influence of human and robot characteristics on acceptance,” Paladyn, J. Behav. Robot., vol. 10, no. 1, pp. 346–358, January 2019, doi: 10.1515/pjbr-2019-0028.
[68] S. Šabanović, C. C. Bennett, W.-L. Chang, and L. Huber, “Paro robot affects diverse interaction modalities in group sensory therapy for older adults with dementia,” in 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), IEEE, 2013, pp. 1–6, doi: 10.1109/ICORR.2013.6650427.
[69] M. Heerink, “Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults,” in Proceedings of the 6th International Conference on Human-Robot Interaction (New York, NY, USA), ACM, 2011, pp. 147–148, doi: 10.1145/1957656.1957704.
[70] E. Broadbent, R. Stafford, and B. MacDonald, “Acceptance of healthcare robots for the older population: review and future directions,” Int. J. Soc. Robot., vol. 1, no. 4, pp. 319–330, 2009, doi: 10.1007/s12369-009-0030-6.
[71] P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. Chen, E. J. de Visser, and R. Parasuraman, “A meta-analysis of factors affecting trust in human-robot interaction,” Hum. Factors, vol. 53, no. 5, pp. 517–527, October 2011, doi: 10.1177/0018720811417254.
[72] M. Heerink, B. Kröse, V. Evers, and B. Wielinga, “Assessing acceptance of assistive social agent technology by older adults: The Almere model,” Int. J. Soc. Robot., vol. 2, no. 4, pp. 361–375, 2010, doi: 10.1007/s12369-010-0068-5.
[73] M. M. A. de Graaf and S. Ben Allouch, “Exploring influencing variables for the acceptance of social robots,” Robot. Auton. Syst., vol. 61, no. 12, pp. 1476–1486, 2013, doi: 10.1016/j.robot.2013.07.007.
[74] T. Shibata, K. Wada, Y. Ikeda, and S. Šabanović, “Cross-cultural studies on subjective evaluation of a seal robot,” Adv. Robot., vol. 23, no. 4, pp. 443–458, 2009, doi: 10.1163/156855309X408826.
© 2020 Samuel Olatunji et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.