The Metrics for Human-Robot Interaction 2008 workshop at the 3rd ACM/IEEE International Conference on Human-Robot Interaction was initiated and organized to further discussion and community progress towards metrics for human-robot interaction (HRI). This report contains the papers presented at the workshop, background information on the workshop itself, and future directions underway within the community.
Abstract—The paper tackles the problem of designing intuitive graphical interfaces for selecting navigational targets for an autonomous robot. Our work focuses on the design and validation of such a flexible interface for an intelligent wheelchair navigating in a large indoor environment. We begin by describing the robot platform and interface design. We then present results from a user study in which participants were required to select navigational targets using a variety of input and filtering methods. We considered two types of input modalities (point-and-click and single-switch) to investigate the effect of constraints on the input mode. We take a particular look at the use of filtering methods to reduce the amount of information presented onscreen and thereby accelerate selection of the correct option.
Non-intuitive styles of interaction between humans and mobile robots still constitute a major barrier to the wider application and acceptance of mobile robot technology. More natural interaction can only be achieved if ways are found of bridging the gap between the forms of spatial knowledge maintained by such robots and the forms of language used by humans to communicate such knowledge. In this paper, we present the beginnings of a computational model for representing spatial knowledge that is appropriate for interaction between humans and mobile robots. Work on spatial reference in human-human communication has established a range of reference systems adopted when referring to objects; we show the extent to which these strategies transfer to the human-robot situation and touch upon the problem of differing perceptual systems. Our results were obtained within an implemented kernel system which permitted experiments with human test subjects interacting with the system. We show how the results of the experiments can be used to improve the adequacy and the coverage of the system, and highlight necessary directions for future research.
Conventional force sensors are overdesigned for measuring human force inputs, such as are needed in research on and applications of human-robot interaction. A new type of force sensor is introduced that is suited to human-robot interaction. This sensor is based on optoelectronic measurements rather than strain gauges. Criteria for material selection and dimensioning are given, and results for linearity, noise, and drift are reported.
In daily human interactions, spatial reasoning occupies an important place. With this ability we can build relations between objects and people, and we can predict the capabilities and the knowledge of the people around us. An interactive robot is also expected to have these abilities in order to establish efficient and natural interaction.
The paper presents a novel control approach for the robot-assisted motion augmentation of disabled subjects during the standing-up manoeuvre. The main goal of the proposal is to integrate the voluntary activity of a person in the control scheme of the rehabilitation robot. The algorithm determines the supportive force to be tracked by a robot force controller. The basic idea behind
We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate. In contrast, our interface uses live and synthesized camera images from the forklift as a canvas, and augments them with object and obstacle information from the world. This connection enables users to "draw on the world," allowing a simpler set of sketched gestures. Our interface supports commands that include summoning the forklift and directing it to lift, transport, and place loads of palletized cargo. We describe an exploratory evaluation of the system designed to identify areas for detailed study.
For intelligent service robots, it is essential to recognize users in order to provide appropriate services to a correctly authenticated user. However, in robot environments in which users freely move around the robot, it is difficult to force users to cooperate for authentication as in traditional biometric security systems. This paper introduces a user authentication system that is designed to recognize users who are unaware of the robot or of its cameras. In the proposed system, biometrics and semi-biometrics are incorporated to cope with the limited applicability of traditional authentication techniques. Semi-biometrics denotes a set of features useful for discriminating persons, but only within a group of interest and a frame of time of interest. As representative semi-biometric features, body height and the color characteristics of clothing are investigated. In particular, a novel method to measure body height with a single camera is proposed. In addition, by incorporating tracking functionality, the system can maintain user status information continuously, which is useful not only for recognition but also for finding a designated person.
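The excerpt does not spell out the proposed single-camera height measurement; as a rough, hypothetical illustration of how standing height can be recovered from one calibrated camera, the sketch below assumes a flat floor, a pinhole model, and a known camera height and tilt (all parameter names and numbers are invented).

```python
import numpy as np

def estimate_body_height(v_feet, v_head, cam_height, f_px, v0, tilt=0.0):
    """Estimate a standing person's height (m) from one calibrated camera.

    v_feet/v_head: image rows (px) of the feet and the top of the head.
    cam_height: camera height above the floor (m); f_px: focal length (px);
    v0: principal-point row; tilt: camera pitch below horizontal (rad).
    """
    # Ray angle below the optical axis for each image row.
    ang_feet = np.arctan((v_feet - v0) / f_px) + tilt
    ang_head = np.arctan((v_head - v0) / f_px) + tilt
    # Flat-floor assumption: the feet ray hits the ground at this distance.
    dist = cam_height / np.tan(ang_feet)
    # The head lies at the same horizontal distance from the camera.
    return cam_height - dist * np.tan(ang_head)

# Illustrative numbers: camera 1.2 m high, 800 px focal length, row 240 center.
print(estimate_body_height(v_feet=400, v_head=160, cam_height=1.2,
                           f_px=800, v0=240))  # ~1.8 m
```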
This paper aims to show the possible and actual synergies between social robotics and sociology. The author argues that social robots are one of the best fields of inquiry to provide a bridge between the two cultures: the social sciences and the humanities on the one hand, and the natural sciences and engineering on the other. To achieve this result, quantitative and qualitative analyses are implemented. By using scientometric tools like Ngram Viewer, search engines such as Google Scholar, and hand calculations, the author detects the emergence of the term-and-concept 'social robots' in its current use, the absolute and relative frequencies of this term in the scientific literature in the period 1800-2008, the frequency distribution of publications including this term in the period 2000-2019, and the magnitude of publications in which the term 'social robots' is associated with the term 'sociology' or 'social work'. Finally, employing qualitative analysis and focusing on exemplary cases, this paper shows different ways of conducting research that relates sociology to robotics, from a theoretical or instrumental point of view. It is argued that sociologists and engineers could work in a team to observe, analyze, and describe the interaction between humans and social robots, using research techniques and theoretical frames provided by sociology. In turn, this knowledge can be used to build more effective and humanlike social robots.
Sound is essential to enhance visual experience and human-robot interaction, but most research and development efforts are directed mainly towards sound generation, speech synthesis, and speech recognition. The reason so little attention has been paid to auditory scene analysis is that real-time perception of a mixture of sounds is difficult. Recently, Nakadai et al. developed real-time auditory and visual multiple-talker tracking technology. In this paper, this technology is applied to human-robot interaction, including a receptionist robot and a companion robot at a party. The system includes face identification, speech recognition, focus-of-attention control, and a sensorimotor task in tracking multiple talkers. The system is implemented on an upper-torso humanoid, and talker tracking is attained by distributed processing on three nodes connected by a 100Base-TX network. The delay of tracking is 200 msec. Focus-of-attention is controlled by associating auditory and visual streams, using the sound source direction and talker position as clues. Once an association is established, the humanoid keeps its face turned toward the associated talker.
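As a simplified illustration of the association step described above (not Nakadai et al.'s actual implementation), the sketch below pairs each sound-source direction with the nearest tracked face azimuth within a tolerance; the data and tolerance are invented.

```python
def associate_streams(sound_azimuths, face_tracks, tol_deg=10.0):
    """Pair auditory streams (directions, deg) with visual streams (faces)."""
    pairs = []
    for s_az in sound_azimuths:
        # Closest tracked face in azimuth.
        best = min(face_tracks, key=lambda t: abs(t["azimuth"] - s_az))
        if abs(best["azimuth"] - s_az) <= tol_deg:
            pairs.append((s_az, best["id"]))
    return pairs

faces = [{"id": "A", "azimuth": -30.0}, {"id": "B", "azimuth": 25.0}]
print(associate_streams([22.0], faces))  # speaker associated with face B
```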
In this paper we draw a distinction between tele-operation and tele-presence. We argue that the disappearance of mediation, as well as the cognitive and physical integration between the human subject and the robotic alias, are fundamental for the realization of telepresence systems. In the experimental parts of this paper, we propose a novel and more suitable approach to developing and designing natural interfaces, based on new knowledge about human motion control provided by the neurosciences. We validate the hypothesis that anticipatory movements of the head take place also when driving robotic artefacts in locomotion tasks. We suggest that these anticipatory head movements could be used as a natural interface to detect a human's steering intention in advance and with no additional cognitive burden for the person. In the final part, we argue that in order to realize telepresence systems, it is also necessary to take into account the psychological and sociological effects caused by technological mediation; in particular, moral disengagement and abstraction may lead to a loss of presence.
Mobile robots are already applied in factories and hospitals, mostly to carry out a single distinct task. It is envisioned that robots will soon assist in households. Such service robots will have to cope with a variety of situations and tasks, and of course with sophisticated human-robot interaction (HRI). A robot should therefore not only observe social rules with respect to proxemics; it should also detect which (interaction) situation it is in and act accordingly. With respect to spatial HRI, our goal is to use non-verbal communication, namely implicit body and machine movements, to make interaction simpler and smoother. A first study aims to acquire a concept of spatial prompting by a human and by the robot in a "passing by" scenario. The results will be used to enrich the existing system with an appropriate passing behaviour and to distinguish between passage situations and others.
This article describes the user-centred development of play scenarios for robot assisted play, as part of the multidisciplinary IROMEC project (Interactive Robotic Social Mediators as Companions, co-funded by the European Commission in the 6th Framework Program under contract IST-FP6-045356) that develops a novel robotic toy for
The book shows how depictions of love relationships between humans and machines have changed in science-fiction films from the American and Western European cultural sphere, and what cultural messages about the subjectivity of contemporary humans and of the other subjects in the reality surrounding them are inscribed in the love stories told in the analyzed films. In the author's interpretation, the films under discussion pose the question of whether machines endowed with artificial intelligence could enter into amorous interactions with humans, and what this would mean for both parties in the context of their subjectivity.
The smart house under consideration is a service-integrated complex system to assist older persons and/or people with disabilities. The primary goal of the system is to achieve independent living through various robotic devices and systems. Such a system is treated as a human-in-the-loop system in which human-robot interaction takes place intensely and frequently. Based on our experience of designing and implementing a smart house environment, called Intelligent Sweet Home (ISH), we present a framework for realizing a human-friendly HRI (human-robot interaction) module with various effective techniques of computational intelligence. More specifically, we partition the robotic tasks of the HRI module into three groups according to the level of specificity, fuzziness, or uncertainty of the system's context, and present an effective interaction method for each case. We first show a task planning algorithm and its architecture for handling well-structured tasks autonomously through a simplified set of user commands instead of inconvenient manual operations. To provide the capability of interacting in a human-friendly way in a fuzzy context, we propose that the robot make use of human bio-signals as input to the HRI module, as shown in a hand gesture recognition system called a soft remote control system. Finally, we discuss a probabilistic fuzzy rule-based life-long learning system, equipped with intention-reading capability through learning human behavioral patterns, which is introduced as a solution for uncertain and time-varying situations.
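As a toy illustration of the fuzzy-rule idea behind the soft remote control (the actual ISH rule base is not given in this excerpt), the sketch below grades hypothetical gesture features with triangular membership functions, so imprecise gestures yield graded rather than brittle decisions.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def soft_remote(hand_speed, hand_height):
    """Map two invented hand-gesture features to command confidences."""
    # Rule 1: slow hand raised high -> "lights on".
    r1 = min(tri(hand_speed, 0.0, 0.1, 0.3), tri(hand_height, 1.2, 1.6, 2.0))
    # Rule 2: fast hand at mid height -> "call robot".
    r2 = min(tri(hand_speed, 0.2, 0.5, 0.9), tri(hand_height, 0.8, 1.2, 1.6))
    return {"lights_on": r1, "call_robot": r2}

print(soft_remote(hand_speed=0.12, hand_height=1.55))
```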
Robotic operations carried out via the Internet face several challenges and difficulties, ranging from human-computer interfacing and human-robot interaction to overcoming random time delay and task synchronization. These limitations are intensified when multiple operators at multiple sites collaboratively teleoperate multiple robots to achieve a certain task. In this paper, a new modeling and control method for Internet-based cooperative teleoperation is developed, combining a Petri net model and event-based ...
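As background on the Petri net ingredient (the paper's own model and its event-based extension are not reproduced in this excerpt), here is a minimal sketch of Petri-net semantics with an invented two-operator synchronization example: the shared robot acts only once both operators have issued a command.

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when all inputs are marked."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"op1_cmd": 1, "op2_cmd": 1, "robot_idle": 1})
net.add_transition("execute", ["op1_cmd", "op2_cmd", "robot_idle"], ["robot_busy"])
net.add_transition("done", ["robot_busy"], ["robot_idle"])
net.fire("execute")
print(net.marking)  # both commands consumed, robot now busy
```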
Multimodal natural behavior of humans presents a complex yet highly coordinated set of interacting processes. Providing robots with such interactive skills is a challenging and worthy goal, and numerous such efforts are currently underway; evaluating progress in this direction, however, continues to be a challenge. General methods for measuring the performance of artificially intelligent systems would be of great benefit to the research community. In this paper we describe an approach to evaluating human-robot multimodal natural behavior. The approach is based on a detailed scoring and spatio-temporal analysis of the structure and patterning of live behavior, at multiple temporal scales, down to the decisecond level. The approach is tested in a case study involving an early virtual robot prototype, Gandalf, which is capable of real-time verbal and non-verbal interaction with people. Our analysis includes a comparison to a comparable human-human dyadic interaction scenario. Our main objective is to develop a methodology for comparing the quality and effectiveness of human-robot interaction across a wide variety of such systems. Early results indicate that our approach holds significant promise as a future methodology for evaluating complex systems that have a natural counterpart.
This experiment explored the influence of users' experience (prior interaction) with robots on their attitudes and trust toward robotic agents. Specifically, we hypothesized that prior experience would lead to 1) higher trust scores after viewing a robot complete a task, 2) smaller differences in trust scores when comparing a human and a robot completing the same task, and 3) more positive general attitudes towards robots. These hypotheses were supported, although not all results achieved significant levels of differentiation. These findings confirm that prior experience plays an important role in both user trust and general attitude in human-robot interactions.
In the Human-Robot Interaction domain, many solutions have been proposed to achieve smooth and natural communication and interaction: speech and other kinds of language, as well as other typically human ways of communicating, such as gestures and facial expressions. We argue that a very necessary and effective way of improving HRI, at least from a theoretical point of view, could be purposive mutual understanding, i.e., practical behaviors aimed at achieving a pragmatic goal as well as a communicative one, without any ...
Social robots differ from service robots in that they also master more complex interaction and communication. Some can simulate or even recognize emotions. There are many fields of application: from the household through nursing care to the medical sector. Where do the limits of current systems lie? How must social robots look and interact in order to be perceived as useful helpers rather than as competitors? This article gives a brief overview of existing social robots. It examines their acceptance in the important field of health and care on the basis of the results of an expert study, and offers a timeline perspective on further development.
Robots must be cognizant of how their actions will be interpreted in context. Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor's partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Much of human communication is performed through this implicit mechanism, and humans cannot help but infer some meaning, whether or not it was intended by the actor, from most actions. We present a framework for robots to utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We consider the role of the actor and the observer, both individually and jointly, in implicit communication, as well as the effects of timing. We also show how the framework maps onto various modes of action, including natural language and motion. We consider these modes of action in various human-robot interaction domains, including social navigation and collaborative assembly.
The notion of trust has evolved to become one of the many nebulous buzzwords centered on AI. This paper aims to show that, with regard to human influenceability through AI, trust in AI is to be seen as problematic. Based on the notion of socio-technical epistemic systems, I will argue that the trust human agents have in AI is strongly related to what could be understood as algorithmic authority. The second part of this paper will then translate the elaborated line of argument to the field of social robotics.
In this paper we present Robox, a mobile robot designed for operation in a mass exhibition, and our experience with its installation at the Swiss National Exhibition Expo.02. Robox is a fully autonomous mobile platform with unique multi-modal interaction capabilities, a novel approach to global localization using multiple Gaussian hypotheses, and powerful obstacle avoidance. Eleven Robox robots ran for 12 hours daily from May 15 to October 20, 2002, traveling more than 3315 km and interacting with 686,000 visitors.
In proposing an ontology of motion capture, this paper identifies three modalities — capture, hold, release — to conceptualise the peculiar affordances of motion capture technology in its relationship to a performer’s movement. Motion capture is unique among contemporary moving image media in its capacity to re-perform a performer’s recorded movement a potentially limitless number of times, e.g. as applied to innumerable different CG characters. Unlike live-action film or even rotoscoping (motion capture’s closest equivalent), the movement extracted from the captured performance lives on, but only by way of the inimagable (non-visible) domain of motion data. Motion data ‘holds’ movement itself in inimagable form, and ‘releases’ it in the domain of the digital moving image. This tri-fold conception relates an important dimension of (Heideggerian) Being to the idea of movement as fundamental to an ontology or ‘being’ of motion capture. At the same time, the proposed ontology challenge...
It is a matter of course for researchers and developers of state-of-the-art technology for human-computer or human-robot interaction to create not only systems that can precisely fulfill a certain task: such systems must also provide strong robustness against internal and external errors and user-dependent application errors. Especially when creating service robots for a variety of applications, or robots for accompanying humans in everyday situations, sufficient error robustness is crucial for acceptance by users. But experience shows that operating such systems under real-world conditions with inexperienced users is an extremely challenging task which is still not satisfactorily solved. In this paper we present an approach for handling both internal errors and application errors within an integrated system capable of performing extended HRI on different robotic platforms and in unspecified surroundings such as a real-world apartment. Based on the experience gathered from user studies and from evaluating integrated systems in the real world, we implemented several ways to generalize and handle unexpected situations. Adding this kind of error awareness to HRI systems, in cooperation with the interaction partner, avoids getting stuck in an unexpected situation or state and handles mode confusion. Instead of shouldering the enormous effort of accounting for all possible problems, this paper proposes a more general solution and underpins it with findings from naive user studies. This enhancement is crucial for the development of a new generation of robots because, no matter how diligent the preparations, no one can predict how an interaction with a robotic system will develop and what kind of environment it will have to cope with.
Humans can infer, just from the observation of others' actions, a good deal of information about the actor's intents and the properties of the manipulated objects. This intuitive understanding is very efficient and allows two collaborating partners to be prepared to handle common tools: they can estimate the weight of the object the other agent is passing to them even before the hand-over is concluded. Transferring this kind of mutual understanding to human-robot interactions would be particularly beneficial, as it would improve the fluidity of any collaborative task. The question we address in this study is therefore under which conditions humans can estimate the weight lifted by a humanoid robot, and whether the acquisition of this skill requires extensive learning by the human subject. Moreover, we assess whether reading humanoid lifting actions involves the observer's motor system, as happens for weight judgment from the observation of human actions. Our results indicate that, with a proper design of the humanoid lifting motions, human subjects are able to estimate the weight lifted by the humanoid robot with an accuracy similar to that exhibited during human observation. Furthermore, this ability is intuitive and does not require learning or training. Lastly, weight judgment seems to depend on the involvement of the observer's motor system during both human and humanoid observation. These findings suggest that the neural mechanisms at the basis of human interaction can be extended to human-humanoid interaction, allowing for intuitive and proficient collaboration between humanoid robots and untrained human partners.
Being aware of people's presence and activities is fundamental for Human-Robot Interaction and assistive applications. In this paper, we describe (1) designing triadic situations for cognitive stimulation of elderly users; (2) characterizing the social signals that describe the social context: system-directed speech (SDS) and self-talk (ST); and (3) estimating an interaction efficiency measure that reveals the quality of interaction. The proposed triadic situation is formed by a user, a computer providing cognitive exercises, and a robot that provides encouragement and help using verbal and non-verbal signals. The methodology followed to design this situation is presented. Wizard-of-Oz experiments have been performed and analyzed through eye-contact behaviors and dialog acts (SDS and ST). We show that users employ two interaction styles characterized by different prosodic features. Automatic recognition systems for these dialog acts are proposed using k-NN, decision tree, and SVM classifiers trained with pitch-, energy-, and rhythm-based features. The best recognition system achieves an accuracy of 71%, showing that interaction styles can be discriminated on the basis of prosodic features. An Interaction Efficiency (IE) metric is proposed to characterize interaction styles. This metric exploits on-view/off-view discrimination, semantic analysis, and ST/SDS discrimination. Experiments on collected data prove the effectiveness of the IE measure in evaluating the quality of interaction of elderly patients during the cognitive stimulation task.
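As a hedged sketch of the kind of prosodic dialog-act classifier described above (the feature set and data here are synthetic, not from the study), one might train and compare k-NN and SVM classifiers on pitch- and energy-based features:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Per utterance: [mean pitch (Hz), pitch range, mean energy, speech rate].
X_sds = rng.normal([220, 80, 0.7, 4.5], [20, 15, 0.1, 0.5], (50, 4))  # system-directed
X_st = rng.normal([180, 40, 0.4, 3.0], [20, 15, 0.1, 0.5], (50, 4))   # self-talk
X = np.vstack([X_sds, X_st])
y = np.array([1] * 50 + [0] * 50)  # 1 = SDS, 0 = ST

for name, clf in [("k-NN", KNeighborsClassifier(5)), ("SVM", SVC())]:
    pipe = make_pipeline(StandardScaler(), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```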
We present an effective and fast method for static hand gesture recognition. This method is based on classifying the different gestures according to geometric invariants obtained from image data after segmentation; thus, unlike many other recognition methods, this method does not depend on skin color. Gestures are extracted from each frame of the video, with a static background. The segmentation is done by dynamic extraction of background pixels according to the histogram of each image. Gestures are classified using a weighted k-nearest-neighbors algorithm combined with a naive Bayes approach to estimate the probability of each gesture type.
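The exact geometric invariants are not listed in this excerpt; as one plausible sketch, Hu moments of the segmented hand mask are invariant to translation, scale, and rotation, and can feed an inverse-distance-weighted k-NN vote (the naive Bayes combination step is omitted here). Assumes OpenCV; the gesture templates are placeholders.

```python
import cv2
import numpy as np

def hu_features(mask):
    """7 log-scaled Hu moment invariants of a binary hand mask (uint8)."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(mask, templates, k=3):
    """Weighted k-NN vote over stored (features, label) gesture templates."""
    f = hu_features(mask)
    neighbors = sorted((np.linalg.norm(f - tf), lab) for tf, lab in templates)
    votes = {}
    for d, lab in neighbors[:k]:
        votes[lab] = votes.get(lab, 0.0) + 1.0 / (d + 1e-9)  # closer = heavier vote
    return max(votes, key=votes.get)
```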
This paper presents a framework for multimodal human-robot interaction. The proposed framework is being implemented in a personal robot called Maggie, developed at the RoboticsLab of the University Carlos III of Madrid for social interaction research. The control architecture of this personal robot is a hybrid control architecture called AD (automatic-deliberative) that incorporates an emotion control system (ECS). Maggie's main goal is to establish a peer-to-peer relationship with humans through interaction. To achieve this goal, a set of human-robot interaction skills is developed based on the proposed framework. These skills involve tactile, visual, remote voice, and sound modes. The multimodal fusion and synchronization are also presented in this paper.
This paper presents a strategy for ensuring safety during human-robot interaction in real time. A measure of danger during the interaction is explicitly computed, based on factors affecting the impact force during a potential collision between the human and the robot. This danger index is then used as an input to real-time trajectory generation when the index exceeds a predefined threshold. The danger index is formulated to produce a provably stable, fast response in the presence of multiple surrounding obstacles. A motion strategy to minimize the danger index is developed for articulated multi-degree-of-freedom robots. Simulations and experiments show the efficacy of this approach.
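The paper's exact danger-index formulation is not reproduced here; as a hedged sketch of the general idea, the index below is a product of normalized distance, approach-velocity, and effective-inertia factors, so it is non-zero only when all impact-risk factors are present at once. All thresholds are illustrative.

```python
def danger_index(dist, rel_vel, inertia,
                 d_min=0.2, d_max=1.5, v_max=1.0, m_max=10.0):
    """Toy danger index in [0, 1] for a human-robot pair."""
    clamp = lambda x: max(0.0, min(1.0, x))
    f_d = clamp((d_max - dist) / (d_max - d_min))  # grows as robot nears human
    f_v = clamp(rel_vel / v_max)                   # only approaching motion counts
    f_m = clamp(inertia / m_max)                   # heavier inertia, harder impact
    return f_d * f_v * f_m

# Evasive trajectory generation would trigger above a threshold, e.g. 0.3.
print(danger_index(dist=0.5, rel_vel=0.6, inertia=6.0))  # ~0.28
```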
It is generally recognized that non-perceptual factors like age, gender, education, and computer experience can have a moderating effect on how perception of a technology leads to acceptance of it. In our present research we are exploring the influence of these factors on the acceptance of assistive social robots by older adults. In this short paper we discuss the results of a user study in which a movie of an elderly person using an assistive social robot was shown to older adults. The analysis of the responses gives a first indication of whether and how these factors relate to the perceptual processes that lead to acceptance.
Human-robot interaction (HRI) for mobile robots is still in its infancy. Most user interactions with robots have been limited to tele-operation capabilities where the most common interface provided to the user has been the video feed from the robotic platform ...
Social robots are those endowed with communication channels and abilities that take inspiration from human beings. The scope of such abilities should include those allowing a robot to understand people’s affective states and expressions, intentions, and actions, and to interpret them based on contextual information. Childcare robots are an example of robots that could take advantage of the integration of these capabilities. This commentary conducts a technical appraisal of the notion of autonomous childcare robots, focusing on these social perceptive capabilities and reviewing some of the key challenges that remain to be investigated by the research community in this respect.
Most telerobotic applications rely on a Human-Robot Interface that requires the operator to continuously monitor the state of the robot through visual feedback while using manual input devices to send commands to control the navigation of the robot. Although this setup is present in many examples of telerobotic applications, it may not be suitable when it is not possible or desirable to have manual input devices, or when the operator has a motor disability that does not allow the use of such input devices. Since the operator already uses his/her eyes in the monitoring task, an interface based on gaze input could be used to teleoperate the robot. This paper presents a telerobotic platform with a user interface based on eye-gaze tracking that enables a user to control the navigation of a teleoperated mobile robot using only his/her eyes as inputs to the system. Details of the operation of the eye-gaze tracking system and the results of a task-oriented evaluation of the developed system are also included.
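As an illustrative sketch (not the paper's actual interface) of how a gaze point on the robot's video feed could be mapped to navigation commands: horizontal gaze offset steers, vertical position scales forward speed, and off-screen gaze stops the robot. The screen size and speed limits are invented.

```python
def gaze_to_twist(gx, gy, width=640, height=480, v_max=0.5, w_max=1.0):
    """Map a gaze point (px) to (linear m/s, angular rad/s) commands."""
    if not (0 <= gx < width and 0 <= gy < height):
        return 0.0, 0.0                        # looking away: stop
    x_norm = (gx - width / 2) / (width / 2)    # -1 (left) .. +1 (right)
    y_norm = 1.0 - gy / height                 # 0 (bottom) .. 1 (top)
    return v_max * y_norm, -w_max * x_norm     # right gaze -> clockwise turn

print(gaze_to_twist(480, 120))  # upper-right gaze: forward while turning right
```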
Face-to-face communication is a real-time process operating at a time scale on the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory-rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them in real time with respect to 7 dimensions: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of AdaBoost and SVMs enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions; generalization performance to new subjects was evaluated with a 7-way forced-choice task. Most interestingly, the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation for coding facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms, including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors and assessment of human-robot interaction.
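As an illustrative sketch of the Gabor-plus-SVM stage (faces are assumed already detected and cropped; the filter bank, feature summary, and data are invented, and the AdaBoost feature-selection step is omitted):

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(patch, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Mean Gabor magnitude per frequency/orientation over a face patch."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.hypot(real, imag).mean())
    return np.array(feats)

rng = np.random.default_rng(0)
patches = rng.random((20, 48, 48))     # stand-ins for cropped face images
labels = rng.integers(0, 7, 20)        # 7 expression classes
X = np.array([gabor_features(p) for p in patches])
clf = SVC().fit(X, labels)             # multi-class handled one-vs-one internally
print(clf.predict(X[:3]))
```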
The capability of collaboration is critical in the design of symbiotic cognitive systems. To obtain this functional capability, a cognitive system should possess evaluative and communicative processes. Emotions and their underlying processes provide such functions in social and collaborative environments. We investigate the mutual influence of affective and collaboration processes in a cognitive theory to support the interaction between humans and robots or virtual agents. We have developed new algorithms for these processes, as well as a new overall computational model for implementing collaborative robots and agents. We build primarily on the cognitive appraisal theory of emotions and the SharedPlans theory of collaboration to investigate the structure, fundamental processes, and functions of emotions in a collaboration context.
In this paper we describe a system that enables a mobile robot equipped with a color vision system to track humans in indoor environments. We developed a method for tracking humans when they are within the field of view of the camera, based on motion and color cues. However, the robot also has to keep track of humans who leave the field of view and re-enter later. We developed a dynamic Bayesian network for such a global tracking task. Experimental results on real data confirm the viability of the developed method.
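As a hedged sketch of the color cue such a tracker could use to re-identify a person who re-enters the field of view (the paper's dynamic Bayesian network is not reproduced here), histogram intersection over hue gives a simple appearance-match score. An OpenCV-style hue range (0-180) is assumed.

```python
import numpy as np

def hue_histogram(hsv_pixels, bins=16):
    """Normalized hue histogram of a person's pixels (N x 3 HSV array)."""
    h, _ = np.histogram(hsv_pixels[:, 0], bins=bins, range=(0, 180))
    return h / max(h.sum(), 1)

def match_track(candidate_hist, track_hists, threshold=0.7):
    """Return the best-matching track id by histogram intersection, or None."""
    best_id, best_score = None, threshold
    for tid, hist in track_hists.items():
        score = np.minimum(candidate_hist, hist).sum()  # intersection in [0, 1]
        if score > best_score:
            best_id, best_score = tid, score
    return best_id
```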
The human-robot interaction community is multidisciplinary by nature, with members from social science to engineering backgrounds. In this paper we aim to provide human-robot developers with a straightforward toolkit to evaluate users' acceptance of the assistive social robots they are designing or developing for elderly-care environments. We explain how we developed the measures for this analysis, provide do's and don'ts in designing the experiments, and demonstrate the application of the measures we have developed for this purpose and the analysis and interpretation of the data. As such, we hope to engage human-robot interaction developers in evaluating the acceptability of their own robot to inform the development process and improve the final robot's design. (Questionnaire constructs: SI, Use, PEOU, FC, PU, ANX, PENJ, ATT, Trust, ITU, PAD, PS, SP.)