Can a Robot’s Hand Bias Human Attention?

Giulia Scorza Azzarà, DIBRIS, University of Genoa and RBCS, Italian Institute of Technology, Genoa, Italy (giulia.scorza@iit.com)
Joshua Zonca, CONTACT Unit, Italian Institute of Technology, Genoa, Italy (joshua.zonca@iit.com)
Francesco Rea, CONTACT Unit, Italian Institute of Technology, Genoa, Italy (francesco.rea@iit.com)
Joo-Hyun Song, Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, USA (joo-hyun_song@brown.edu)
Alessandra Sciutti, CONTACT Unit, Italian Institute of Technology, Genoa, Italy (alessandra.sciutti@iit.com)

ABSTRACT
Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect may also occur towards a human partner’s hand, but only after sharing a physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner’s attention. We replicated a classical psychological paradigm for measuring this attentional bias (i.e., the near-hand effect), adding a robotic condition. Preliminary results confirmed the near-hand effect when participants performed the task with their own hand near the screen, with shorter reaction times for targets appearing on the same side as the hand. In contrast, we found no effect for the robot’s hand in the absence of a previous collaborative interaction with the robot, in line with studies involving human partners.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI.

KEYWORDS
Human attention, Posner cueing task, Near-hand effect

ACM Reference Format:
Giulia Scorza Azzarà, Joshua Zonca, Francesco Rea, Joo-Hyun Song, and Alessandra Sciutti. 2023. Can a Robot’s Hand Bias Human Attention?. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23 Companion), March 13–16, 2023, Stockholm, Sweden. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3568294.3580074

1 INTRODUCTION
In everyday activities, we often need to coordinate and synchronize with our partners, whose perceptual and motor abilities might differ from ours. Human beings effectively understand each other’s intentions while interacting in social contexts. This capability is particularly remarkable considering that our visual perception of space is often inaccurate and can suffer from biases (e.g., central tendency [11], [22]), illusions (e.g., rescaling [20], [21]), and similar phenomena affecting human attention [4], [19]. One of the main goals of HRI is to design and employ robots that can interact and collaborate efficiently and naturally with human beings. To this aim, it is crucial to investigate the perceptual, motor, and attentional mechanisms that could support (or hinder) mutual understanding between the parties [16]. Previous studies showed that human attention prioritizes the space near the hands [1], [15], resulting in shorter reaction times when detecting visual stimuli that appear close to one’s own hands (the so-called near-hand effect) [14]. Moreover, after a collaborative task, this effect also extends to a human partner’s hand [17]. These findings suggest that collaborative interaction with a human partner influences our shared body representation, biasing our attention toward the partner’s hand almost as if it were ours. Our project aims to investigate under which conditions attentional biases, such as the near-hand effect, can also occur in a human-robot interaction context. To achieve this goal, we need to address two questions:
(1) Does the presence of an anthropomorphic robot hand bias human attention, producing a near-hand effect?
(2) Can a collaborative joint physical interaction with the robot lead to the near-hand effect?

To answer these research questions, we replicated a well-known psychology paradigm for studying the near-hand effect in humans in individual and joint settings [12], using the iCub robot [7] as a controllable stimulus. From a technical point of view, iCub is an optimal choice since its hands have a structure and size similar to human hands and feature many degrees of freedom that allow human-like movements. Moreover, it can be programmed to act as a social agent and generate bio-inspired movements and actions [6], supporting compliance in physical interaction. In this paper, we describe the methodology and results related to the first research question and introduce the plan to address the second research question in our future work.

2 METHODOLOGY
We designed an experiment to analyze the near-hand effect for iCub’s hand. More precisely, we aimed first to quantify each participant’s near-hand effect for their own hand, and then to evaluate whether such an effect generalizes to an anthropomorphic robot hand that the iCub robot places close to them. The participants performed a well-known attention task, the "Posner cueing task" [12], while sitting next to iCub. The Posner cueing task (Figure 1) is a classical paradigm used to study visual attention. Two empty squares (3.4°) appear on either side (7.4°) of a central fixation cross (3.4°). After a random interval between 1500 and 3000 ms, one square is cued by increasing the thickness of its borders for 200 ms, and then a target appears (a black dot; 2.2°). The participant has to press the space bar on a keyboard as soon as the target appears. If the target appears in the cued square, the trial is classified as valid; otherwise, it is classified as invalid. In some trials, the square remains cued for 2000 ms but no target appears; these are catch trials, used to check whether the participant is still focused on the task. We used 70% valid trials, 20% invalid trials, and 10% catch trials in random order, as done in [17].

Figure 1: Posner cueing task: valid trial (a), invalid trial (b), and catch trial (c).
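As a concrete illustration of this trial structure, the following minimal MATLAB sketch (our own illustration, not the authors’ experiment code; all variable names are hypothetical) generates one block’s randomized schedule with the proportions and timings described above:

    % Illustrative sketch: build one block's randomized trial schedule with
    % 70% valid, 20% invalid, and 10% catch trials (variable names are ours).
    nTrials  = 60;                              % trials per block (Sec. 2.3)
    nValid   = round(0.7 * nTrials);            % 42 valid trials
    nInvalid = round(0.2 * nTrials);            % 12 invalid trials
    nCatch   = nTrials - nValid - nInvalid;     %  6 catch trials
    types    = [repmat("valid", 1, nValid), repmat("invalid", 1, nInvalid), ...
                repmat("catch", 1, nCatch)];
    trialType = types(randperm(nTrials));       % shuffle the trial order

    sides = ["left", "right"];
    for t = 1:nTrials
        cueSide    = sides(randi(2));           % which square gets the thicker border
        foreperiod = 1.5 + 1.5 * rand;          % random 1500-3000 ms pre-cue interval (s)
        switch trialType(t)
            case "valid",   targetSide = cueSide;                 % target in the cued square
            case "invalid", targetSide = sides(sides ~= cueSide); % target opposite the cue
            case "catch",   targetSide = "";                      % cue shown, no target
        end
        % ...draw fixation and squares, thicken the cued square for 200 ms,
        % then show the black-dot target (unless a catch trial) and record
        % the space-bar reaction time.
    end

In the actual study, stimulus drawing and response collection were handled in MATLAB with the PsychToolbox extension (see Section 2.2).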
2.1 Participants
Twenty-two right-handed people participated in the study (15 female, 7 male; mean age = 26.05 years; SD = 5.21 years). All participants had normal or corrected-to-normal vision and were naive to the purpose of the study. The Regional Ethical Committee approved the experimental protocol for the protection of human participants in research, and all participants provided written informed consent before the experiment.

2.2 Apparatus
The experiment was programmed in MATLAB using the PsychToolbox extension. All visual stimuli were drawn in black against a light grey background on a monitor with a display resolution of 1024 × 768 pixels. The experimental setup was the same for all conditions. We used a chin rest to keep the participant at a constant distance from the screen (i.e., 50 cm). The participants’ responses and reaction times were collected through the computer keyboard. When asked to place a hand near the computer screen, participants rested their forearm on a support to minimize the discomfort of a prolonged extension of the hand and arm during the task.

Figure 3: Experimental setup: P = participant, R = robot, and E = experimenter. Each participant did the task with their right hand (a) and left hand (b).

2.3 Procedure
Participants performed the Posner cueing task under three possible conditions: no hand near the screen, self-hand (i.e., the human’s own hand) near the screen, and robot’s hand near the screen. Each experimental session included four blocks of 60 trials, two with no hand near the screen and two with a hand near the screen (i.e., the human’s or the robot’s hand), as displayed in Figure 2. The block order was randomized. Each participant completed two sessions, one with the self-hand near the screen and the other with the robot’s hand near the screen. We asked participants to repeat the entire experiment with both hands to rule out possible effects of hand dominance (Figure 3).

Figure 2: Experimental conditions: human’s hand near the screen (a) and robot’s hand near the screen (b).

3 RESULTS AND DISCUSSION
The dependent measure of interest was the participants’ reaction time for target detection. Reaction Times (RTs) were filtered between two thresholds: RTs > 200 ms, the average physiological floor for human RTs [8], and RTs < 1000 ms, following Sun & Thomas [17]. We excluded one participant from the analysis for excessive errors in catch trials (i.e., > 55%). The overall error rate in catch trials was 9.2%. Overall, 5.5% of trials were discarded because they fell outside the 200–1000 ms window.
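As a hedged sketch of this screening step (assuming hypothetical variables: rt, a vector of one participant’s reaction times in seconds, and respondedOnCatch, a logical vector marking catch trials that received a response), the exclusion rules could be expressed as:

    % Illustrative RT screening (hypothetical variable names, not the
    % authors' analysis script).
    minRT = 0.200;  maxRT = 1.000;            % physiological floor [8] / upper cutoff [17], in s
    keep  = rt > minRT & rt < maxRT;          % trials inside the 200-1000 ms window
    rtClean = rt(keep);                       % RTs entering the analysis

    catchErrRate = mean(respondedOnCatch);    % false alarms on catch trials
    excluded = catchErrRate > 0.55;           % participant-level exclusion criterion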
We analyzed participants’ reaction times using paired-sample t-tests. The first test concerns the Posner cueing task itself: we found a significant main effect of cue validity (t(21) = 9.03, p < 0.001), showing that participants detected visual targets faster in valid than in invalid trials, regardless of the experimental condition. However, cue validity did not modulate the near-hand effect.

We then considered the hand validity variable, which encodes whether the hand near the screen is on the same side as the appearing target. In particular, we refer to "hand valid" trials when the hand and the target are on the same side, whereas in "hand invalid" trials the hand and the target are on opposite sides. The bars in Figures 4 and 5 represent the difference between the average RTs of "hand valid" and "hand invalid" trials.

Results showed the expected near-hand effect when performing the attention task in the human hand condition, with significantly shorter reaction times for targets appearing on the same side as the hand near the screen (t(21) = 6.85, p < 0.001). Figure 4a displays the RT delta between the no-hand (green) and the human hand (blue) experimental conditions. The scatter plot in Figure 4b shows that the results are highly consistent across participants: almost all the dots, each representing a participant, lie under the bisector, meaning that most participants’ mean RTs are shorter in the human hand condition than in the no-hand condition.

Figure 4: Effect on the human hand. (a) There is a significant difference in RTs between the no-hand (green) and human hand (blue) conditions. (b) The effect on the human hand is consistent across subjects: almost all the dots lie under the bisector.

On the contrary, we found no significant effect in the robot’s hand condition (t(21) = 0.09, p = 0.928). Figure 5a displays the RT delta between the no-hand (green) and the robot’s hand (red) experimental conditions, showing similar average RTs. In this case, the data are spread along the reference line, as displayed in the scatter plot of Figure 5b, and the mean value lies on the bisector, indicating no significant difference between the no-hand and robot’s hand conditions.

Figure 5: Effect on the robot’s hand. (a) There is no significant difference in RTs between the no-hand (green) and robot’s hand (red) conditions. (b) The effect on the robot’s hand is not consistent across subjects: the data are distributed along the bisector.

Finally, the last test compared the human and robot conditions. We found a significant difference between the two (t(21) = −4.51, p < 0.001), with an overall difference in reaction times of about 13 ms. This difference highlights the presence of the near-hand effect for the human hand and its absence for the robot’s hand.
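For illustration, the hand-validity contrast could be computed per condition as in the sketch below (hypothetical variable names, not the authors’ analysis script; ttest requires MATLAB’s Statistics and Machine Learning Toolbox). Here rtHandValid(s) and rtHandInvalid(s) hold subject s’s mean RT when the target appeared on the same or the opposite side as the hand near the screen:

    % Illustrative hand-validity contrast (hypothetical variable names).
    delta = rtHandInvalid - rtHandValid;                  % per-subject near-hand effect, in s
    [~, p, ~, stats] = ttest(rtHandValid, rtHandInvalid); % paired-sample t-test
    fprintf('Near-hand effect: %.0f ms, t(%d) = %.2f, p = %.4f\n', ...
            1000 * mean(delta), stats.df, stats.tstat, p);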
4 CONCLUSION AND FUTURE WORK
This project aims to assess whether collaborative interaction with a humanoid robot can shape basic attentional and perceptual mechanisms in humans, as previously observed in human-human interaction scenarios. The preliminary results of the experiment confirmed the presence of the near-hand effect for the self-hand through the performance of an attention task, i.e., the Posner cueing task. Moreover, we found no effect caused by the mere presence of an anthropomorphic robot hand. This finding extends the results of research conducted with fake human-like hands or other persons’ hands [3], [5], suggesting that an anthropomorphic robot hand, like a friend’s human hand, is not per se sufficient to shift human attention toward itself.

In the next experiment, we will assess whether a physical human-robot interaction can bias human attention near the robot’s hand. The task will consist of a physical joint action between the human and iCub, inspired by existing human-human research; participants will then repeat the Posner task. We have built the collaborative task for the physical human-robot interaction and tested it in a pilot study. We hypothesize that the near-hand effect will emerge after the physical HRI, as it does between human dyads [17]. The use of the robot will allow us to precisely control and quantitatively assess the dynamics of the interaction, gaining better insight into which features of a joint action might influence the appearance of a "joint" near-hand effect as observed in human-human collaborative interaction. Furthermore, it will be possible to assess the role played by the social component of the interaction by manipulating the robot’s behavior to exhibit different levels of social intelligence, which has been shown to impact basic perceptual mechanisms such as human space perception [10].

A final consideration concerns adaptation, a fundamental ability evident at both the behavioral and physiological levels and still quite an open issue in the field [2], [13], [18]. In future work on this project, it would be interesting to enable the robot to adapt its behavior to that of the current interaction partner, allocating leader-follower roles naturally, as happens in human dyads [9].

ACKNOWLEDGMENTS
This work has been supported by a Starting Grant from the European Research Council (ERC) under the European Union’s H2020 research and innovation programme, G.A. No 804388, wHiSPER. We also acknowledge the support from the National Science Foundation (NSF) BCS 2043328.
REFERENCES
[1] Richard A Abrams, Christopher C Davoli, Feng Du, William H Knapp III, and Daniel Paull. 2008. Altered vision near the hands. Cognition 107, 3 (2008), 1035–1047.
[2] Muneeb Imtiaz Ahmad, Omar Mubin, and Joanne Orlando. 2017. A systematic review of adaptivity in human-robot interaction. Multimodal Technologies and Interaction 1, 3 (2017), 14.
[3] James R Brockmole, Christopher C Davoli, Richard A Abrams, and Jessica K Witt. 2013. The world within reach: Effects of hand posture and tool use on visual cognition. Current Directions in Psychological Science 22, 1 (2013), 38–44.
[4] Joshua D Cosman and Shaun P Vecera. 2010. Attention affects visual perceptual processing near the hand. Psychological Science 21, 9 (2010), 1254–1258.
[5] Christopher C Davoli and Philip Tseng. 2015. Taking a hands-on approach: Current perspectives on the effect of hand position on vision. Frontiers in Psychology 6 (2015), 1231.
[6] Sarah Degallier, Ludovic Righetti, Lorenzo Natale, Francesco Nori, Giorgio Metta, and Auke Ijspeert. 2008. A modular bio-inspired architecture for movement generation for the infant-like robot iCub. In 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics. IEEE, 795–800.
[7] Tobias Fischer, Jordi-Ysard Puigbò, Daniel Camilleri, Phuong DH Nguyen, Clément Moulin-Frier, Stéphane Lallée, Giorgio Metta, Tony J Prescott, Yiannis Demiris, and Paul FMJ Verschure. 2018. iCub-HRI: a software framework for complex human-robot interaction scenarios on the iCub humanoid robot. Frontiers in Robotics and AI 5 (2018), 22.
[8] Robert J. Kosinski. 2008. A literature review on reaction time. Clemson University 10, 1 (2008), 337–344.
[9] Rebeka Kropivšek Leskovar, Jernej Čamernik, and Tadej Petrič. 2021. Leader-Follower Role Allocation for Physical Collaboration in Human Dyads. Applied Sciences 11, 19 (2021), 8928.
[10] Carlo Mazzola, Francesco Rea, and Alessandra Sciutti. 2022. Shared perception is different from individual perception: a new look on context dependency. IEEE Transactions on Cognitive and Developmental Systems (2022).
[11] Maria Olkkonen, Patrice F McCarthy, and Sarah R Allred. 2014. The central tendency bias in color perception: Effects of internal and external noise. Journal of Vision 14, 11 (2014), 5–5.
[12] Michael I Posner. 1980. Orienting of attention. Quarterly Journal of Experimental Psychology 32, 1 (1980), 3–25.
[13] Francesco Rea, Alessia Vignolo, Alessandra Sciutti, and Nicoletta Noceti. 2019. Human motion understanding for selecting action timing in collaborative human-robot interaction. Frontiers in Robotics and AI 6 (2019), 58.
[14] Catherine L Reed, Ryan Betz, John P Garza, and Ralph J Roberts. 2010. Grab it! Biased attention in functional hand and tool space. Attention, Perception, & Psychophysics 72, 1 (2010), 236–245.
[15] Catherine L Reed, Jefferson D Grubb, and Cleophus Steele. 2006. Hands up: attentional prioritization of space near the hand. Journal of Experimental Psychology: Human Perception and Performance 32, 1 (2006), 166.
[16] Alessandra Sciutti, Martina Mara, Vincenzo Tagliasco, and Giulio Sandini. 2018. Humanizing human-robot interaction: On the importance of mutual understanding. IEEE Technology and Society Magazine 37, 1 (2018), 22–29.
[17] Hsin-Mei Sun and Laura E Thomas. 2013. Biased attention near another’s hand following joint action. Frontiers in Psychology 4 (2013), 443.
[18] Ana Tanevska, Francesco Rea, Giulio Sandini, Lola Cañamero, and Alessandra Sciutti. 2020. A socially adaptable framework for human-robot interaction. Frontiers in Robotics and AI 7 (2020), 121.
[19] Philip Tseng, Bruce Bridgeman, and Chi-Hung Juan. 2012. Take the matter into your own hands: a brief review of the effect of nearby-hands on visual processing. Vision Research 72 (2012), 74–77.
[20] Björn Van der Hoort and H Henrik Ehrsson. 2014. Body ownership affects visual perception of object size by rescaling the visual representation of external space. Attention, Perception, & Psychophysics 76, 5 (2014), 1414–1428.
[21] Björn Van der Hoort and H Henrik Ehrsson. 2016. Illusions of having small or large invisible bodies influence visual perception of object size. Scientific Reports 6, 1 (2016), 1–9.
[22] Yang Xiang, Thomas Graeber, Benjamin Enke, and Samuel J Gershman. 2021. Confidence and central tendency in perceptual judgment. Attention, Perception, & Psychophysics 83, 7 (2021), 3024–3034.