Francesco Rea

    To most people, magicians seem to surpass human abilities, combining skill and deception to perform mesmerizing tricks. Robots performing magic tricks could similarly fascinate and engage the audience, potentially establishing a novel rapport with human partners. However, magician robots are usually operated via Wizard of Oz. This study presents an autonomous framework for performing a magic trick within a quick, game-like human-robot interaction. The iCub humanoid robot plays the role of a magician in a card game, autonomously inferring which card the human partner is lying about. We exploited cognitive load assessment via pupil reading to infer the mental state of the player. The validation results show an accuracy of 90.9% and indicate that the game can be simplified to improve its portability. This suggests the feasibility of our approach and paves the way toward a real-world application of the game.
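    The inference step can be illustrated with a minimal sketch: assuming one pupil-diameter trace is recorded while the player answers about each card, the card whose answer produced the largest baseline-corrected dilation (i.e., the highest cognitive load) is flagged as the lie. Function and variable names below are illustrative, not the actual implementation.

        import numpy as np

        def infer_lied_card(pupil_traces, baseline):
            """pupil_traces: dict card_id -> 1-D array of pupil diameters (mm)
            recorded while the player answers about that card.
            baseline: resting pupil diameter estimated before the game."""
            # Cognitive load tends to enlarge the pupil, so the answer that
            # produced the largest baseline-corrected dilation is flagged as the lie.
            scores = {card: np.mean(trace) - baseline
                      for card, trace in pupil_traces.items()}
            return max(scores, key=scores.get)

        # Hypothetical usage with synthetic data
        rng = np.random.default_rng(0)
        traces = {c: 3.0 + rng.normal(0, 0.05, 200) for c in ["ace", "king", "queen"]}
        traces["king"] += 0.3        # simulated extra load when lying about the king
        print(infer_lied_card(traces, baseline=3.0))   # -> "king"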
    Modern robotics is interested in developing humanoid robots with meta-cognitive capabilities in order to create systems that can deal efficiently with novel situations and unforeseen inputs. Given the relational nature of human beings, and with a glimpse into the future of assistive robots, it seems relevant to start thinking about the nature of the interaction with such robots, which are increasingly human-like not only in appearance but also in behavior. The question posed in this abstract concerns the possibility of ascribing to the robot not only a mind but a more profound dimension: a Self.
    We propose a self-supervised generative model for addressing the perspective translation problem. In particular, we focus on third-person to first-person view translation as the primary and most common form of perspective translation in human-robot interaction. Evidence shows that this skill develops in children from the very first months of life. In nature, this skill has also been found in many animal species. Endowing robots with perspective translation would be an important contribution to research fields such as imitation learning and action understanding. We trained our model on simple RGB videos representing actions seen from different perspectives, specifically the first person (ego-vision) and the third person (allo-vision). We demonstrate that the learned model generates results that are visually consistent. We also show that our solution automatically learns an embedded representation of the action that can be useful for tasks like action/scene recognition.
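    As a rough illustration of the idea (the architecture below is an assumption made for the sketch, not the paper's model), a convolutional encoder-decoder can be trained to map a third-person frame to the paired first-person frame, with the bottleneck acting as the embedded action representation mentioned above.

        import torch
        import torch.nn as nn

        class PerspectiveTranslator(nn.Module):
            def __init__(self, latent_dim=128):
                super().__init__()
                # Encoder: third-person frame -> embedded action representation
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, latent_dim, 4, stride=2, padding=1), nn.ReLU(),
                )
                # Decoder: embedding -> predicted first-person frame
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def forward(self, third_person_frame):
                z = self.encoder(third_person_frame)   # embedded representation,
                return self.decoder(z), z              # reusable for action recognition

        # Self-supervised training signal: reconstruct the paired ego-view frame
        model = PerspectiveTranslator()
        third, ego = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
        pred, _ = model(third)
        loss = nn.functional.mse_loss(pred, ego)
        loss.backward()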
    The high demand for autonomous and flexible HRI implies the necessity of deploying Machine Learning (ML) mechanisms in robot control. Indeed, the use of ML techniques such as Reinforcement Learning (RL) makes the robot's behaviour during the learning process not transparent to the observing user. In this work, we propose an emotional model to improve transparency in RL tasks for human-robot collaborative scenarios. The architecture we propose supports the RL algorithm with an emotional model able both to receive human feedback and to exhibit emotional responses based on the learning process. The model is entirely based on the Temporal Difference (TD) error. The architecture was tested in an isolated laboratory with a simple setup. The results highlight that showing its internal state through an emotional response is enough to make a robot transparent to its human teacher. People also prefer to interact with a responsive robot because they are used to understanding their intent...
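    A minimal sketch of the idea, assuming a tabular TD(0) learner (the specific mapping from TD error to a facial expression is an illustrative choice, not the paper's exact model): the sign and magnitude of the TD error drive the emotional display shown to the human teacher.

        import numpy as np

        class TransparentTDAgent:
            def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
                self.q = np.zeros((n_states, n_actions))
                self.alpha, self.gamma = alpha, gamma

            def update(self, s, a, r, s_next):
                # Standard TD(0) update; the TD error is reused as the emotional signal
                td_error = r + self.gamma * self.q[s_next].max() - self.q[s, a]
                self.q[s, a] += self.alpha * td_error
                return self.emotional_response(td_error)

            @staticmethod
            def emotional_response(td_error):
                # Better-than-expected outcomes -> positive display, worse -> negative,
                # so the human teacher can follow the learning process.
                if td_error > 0.1:
                    return "happy"
                if td_error < -0.1:
                    return "sad"
                return "neutral"

        agent = TransparentTDAgent(n_states=5, n_actions=2)
        print(agent.update(s=0, a=1, r=1.0, s_next=2))   # -> "happy" on a positive surprise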
    Previous research has shown that the perception that one’s partner is investing effort in a joint action can generate a sense of commitment, leading participants to persist longer despite increasing boredom. The current research extends this finding to human-robot interaction. We implemented a 2-player version of the classic snake game which became increasingly boring over the course of each round, and operationalized commitment in terms of how long participants persisted before pressing a ‘finish’ button to conclude each round. Participants were informed that they would be linked via internet with their partner, a humanoid robot. Our results reveal that participants persisted longer when they perceived what they believed to be cues of their robot partner’s effortful contribution to the joint action. This provides evidence that the perception of a robot partner’s effort can elicit a sense of commitment to human-robot interaction.
    Mutual synchronization plays a decisive role in effective collaboration on human joint tasks. Interactions between humans and robots need to show similar emergent coordination. To this aim, models of human synchronization have recently been ported to collaborative robots with success [1]. However, it is also important to consider under which conditions the human partner is willing to adapt to the robot while performing a joint task. The main research goal of this study is to understand whether the temporal adaptation usually observed during human-human interaction also occurs during human-robot cooperation. We present a collaborative joint task engaging both human subjects and the humanoid robot iCub in pursuing an identical common goal: putting blocks into a box. We examine human action timing, extracted from motion capture data, in order to investigate whether humans adapt their behavior to the robot. We compare a quantitative measure of such adaptation with the subjective evaluation extracted from questionnaires. We observe that on average participants tend to adapt to their robotic partner. Nevertheless, looking at individual behaviors, only a few showed a clear adaptation to the robot's timing, even though the vast majority of subjects reported having been influenced by the robot. We conclude by discussing the potential factors influencing human adaptability, suggesting that the robot's speed of execution is decisive for coordination.
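    One simple way to quantify the temporal adaptation discussed above (the measure below is a hypothetical illustration, not necessarily the one used in the study) is to track how the human's inter-action interval drifts toward the robot's pace over the course of a session.

        import numpy as np

        def adaptation_index(human_times, robot_period):
            """human_times: timestamps (s) at which the human placed a block.
            robot_period: the robot's fixed inter-action interval (s)."""
            intervals = np.diff(human_times)
            # Compare the mismatch with the robot's pace early vs. late in the session:
            # a positive index means the human moved toward the robot's timing.
            early = np.abs(intervals[: len(intervals) // 2] - robot_period).mean()
            late = np.abs(intervals[len(intervals) // 2:] - robot_period).mean()
            return early - late

        times = np.array([0.0, 1.4, 2.9, 4.6, 6.6, 8.6, 10.6])   # synthetic session
        print(adaptation_index(times, robot_period=2.0))          # > 0 -> adaptation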
    Social robots will soon be part of human society, where interactions are sometimes characterized by dishonesty. Hence, they will need to detect lies to better understand humans' behavior, for instance, to assess who is trustworthy and provide better support in professions like teaching, caregiving, and law enforcement. In this manuscript, we present ongoing work, started three years ago, aimed at enabling the humanoid robot iCub to autonomously detect lies, in real time, during an informal interaction. Our approach is based only on assessing the human partner's cognitive load, through the measurement of their pupil dilation. We show our scientific advancements and provide useful insights both to improve our system and to drive future developments in the field of lie detection in HRI.
    In the future, robots will interact more and more with humans and will have to communicate naturally and efficiently. Automatic speech recognition (ASR) systems will play an important role in creating natural interactions and making robots better companions. Humans excel at speech recognition in noisy environments and are able to filter out noise. Looking at a person's face is one of the mechanisms that humans rely on when filtering speech in such noisy environments. Having a robot that can look toward a speaker could benefit ASR performance in challenging environments. To this aim, we propose a self-supervised reinforcement learning-based framework, inspired by early human development, to allow the robot to autonomously create a dataset that is later used to learn to localize speakers with a deep learning network.
    Locating a speaker in space is a skill that plays an essential role in conducting smooth and natural social interactions. Equipping robots with this ability could lead to more fluid human-robot interaction, also by facilitating voice recognition in noisy environments. Most recently proposed sound localisation systems rely on model-based approaches. However, their performance depends on carefully chosen parameters, especially in the binaural and noisy settings typical of humanoid setups. The need for fine-tuning and for adaptation when facing new environments represents a considerable obstacle to the use and portability of such systems in real human-robot interaction scenarios. To overcome these limitations, we propose to rely on data-driven approaches (i.e., deep learning) and exploit multi-sensory mechanisms to leverage the direct experience sensed by the robot during an interaction. Taking inspiration from how humans use vision to calibrate their auditory space representation through experience, we enabled the robot to learn to localise a speaker in a self-supervised way. Our results show that this approach is suitable for learning to localise speakers in the challenging environments typical of human-robot collaboration.
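    The self-supervised labelling idea can be sketched as follows (helper names such as get_camera_frame, detect_face_azimuth, and get_binaural_features are placeholders, not the actual robot API): whenever a talking face is visible, its azimuth estimated from vision becomes the training label for the binaural audio features recorded at the same instant.

        import torch
        import torch.nn as nn

        class AzimuthRegressor(nn.Module):
            """Maps binaural audio features (e.g., interaural level/time differences)
            to the speaker's azimuth in degrees."""
            def __init__(self, n_features=64):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                         nn.Linear(128, 1))

            def forward(self, x):
                return self.net(x)

        def collect_self_supervised_pair(get_camera_frame, detect_face_azimuth,
                                         get_binaural_features):
            # Vision supplies the label: when a talking face is visible, its azimuth
            # from the camera labels the simultaneously recorded binaural features.
            frame = get_camera_frame()
            label = detect_face_azimuth(frame)          # degrees, None if no face
            if label is None:
                return None
            return get_binaural_features(), torch.tensor([label], dtype=torch.float32)

        # One training step on a self-labelled sample (synthetic stand-ins here)
        model = AzimuthRegressor()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        features, label = torch.rand(1, 64), torch.tensor([[15.0]])
        loss = nn.functional.mse_loss(model(features), label)
        opt.zero_grad(); loss.backward(); opt.step()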
    Robots involved in HRI should be able to adapt to their partners by learning to autonomously select the behaviors that maximize the pleasantness of the interaction for them. To this aim, affect could play two important roles: serve as perceptual input to infer the emotional status and reactions of the human partner, and act as an internal motivation system for the robot, supporting reasoning and action selection. In this perspective, we propose to develop an affect-based architecture for the humanoid robot iCub with the purpose of fully autonomous, personalized HRI. This base framework can be generalized to fit many different contexts (social, educational, collaborative, and assistive), allowing for natural, long-term, and adaptive interaction.
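    The two roles of affect outlined above could be organized roughly as in the following sketch (class and method names are assumptions made for illustration, not the proposed architecture): the partner's perceived affect acts as a reward for the last behavior, while the robot's internal affect biases how it selects the next one.

        import random

        class AffectBasedController:
            def __init__(self, behaviors):
                self.behaviors = behaviors
                self.values = {b: 0.0 for b in behaviors}   # learned pleasantness per behavior
                self.internal_affect = 0.0                   # robot's internal motivation signal

            def select_behavior(self, epsilon=0.1):
                # Role 2: internal affect biases exploration vs. exploitation
                # (negative affect increases the tendency to try something new).
                if random.random() < epsilon + max(0.0, -self.internal_affect) * 0.2:
                    return random.choice(self.behaviors)
                return max(self.values, key=self.values.get)

            def update(self, behavior, partner_valence, lr=0.2):
                # Role 1: the partner's emotional reaction (valence in [-1, 1],
                # e.g. from facial expression analysis) rewards the last behavior.
                self.values[behavior] += lr * (partner_valence - self.values[behavior])
                self.internal_affect = 0.9 * self.internal_affect + 0.1 * partner_valence

        ctrl = AffectBasedController(["tell_joke", "offer_help", "stay_quiet"])
        chosen = ctrl.select_behavior()
        ctrl.update(chosen, partner_valence=0.8)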
    An integrated model for the coordination of whole-body movements of a humanoid robot with a compliant ankle, similar to the human case, is described. It includes a synergy formation part, which takes into account the motor redundancy of the body model, and an intermittent controller, which robustly stabilizes postural sway movements, thus combining the hip strategy with the ankle strategy.
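    The intermittent control component can be illustrated on a single-link inverted-pendulum model of the ankle (parameters and the switching rule below are simplified illustrations, not the paper's controller): a compliant passive ankle acts continuously but is insufficient on its own, and active feedback engages only when the sway angle leaves a small dead zone around upright.

        import numpy as np

        def simulate_intermittent_control(T=10.0, dt=0.001,
                                          g=9.81, l=1.0, m=60.0,
                                          k_passive=400.0, b_passive=10.0,
                                          kp=800.0, kd=200.0, threshold=0.01):
            I = m * l ** 2                      # moment of inertia about the ankle
            theta, omega = 0.02, 0.0            # initial lean angle (rad) and velocity
            trace = []
            for _ in range(int(T / dt)):
                # Compliant passive ankle acts continuously (below critical stiffness)
                torque = -k_passive * theta - b_passive * omega
                # Intermittent active control: PD feedback engages only when the
                # sway angle leaves the dead zone around the upright posture.
                if abs(theta) > threshold:
                    torque += -kp * theta - kd * omega
                alpha = (m * g * l * np.sin(theta) + torque) / I
                omega += alpha * dt
                theta += omega * dt
                trace.append(theta)
            return np.array(trace)

        sway = simulate_intermittent_control()
        print(f"final lean: {sway[-1]:.4f} rad, peak sway: {np.abs(sway).max():.4f} rad")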
    Seeing the world through the eyes of a child is always difficult. Designing a robot that might be liked and accepted by young users is therefore particularly complicated. We investigated children's opinions on which features are most important in an interactive robot during a popular scientific event where we exhibited the iCub humanoid robot to a mixed public of various ages. From the observation of the participants' reactions to various robot demonstrations and from a dedicated ranking game, we found that children's requirements for a robot companion change considerably with age. Before 9 years of age, children give more importance to a human-like appearance, while older kids and adults pay more attention to robot action skills. Additionally, the possibility to see and interact with a robot has an impact on children's judgments, especially by convincing the youngest to also consider a robot's perceptual and motor abilities, rather than just its shape. These results suggest that robot design needs to take into account the different prior beliefs that children and adults might have when they see a robot with a human-like shape.
    In this paper, we present a human-robot interaction study targeting the question of how the attention system of the iCub robot performs in comparison with infants. To answer this, we studied the presentation of a task to infants, to the iCub simulator, and to the iCub robot, and compared the gazing behavior of the recipients. In developmental robotics, as well as in tutoring situations, the gazing behavior of the recipient plays an important role in the subsequent interaction.