The joint controllers used in robots like the Sony Aibo are designed for the task of moving the joints of the robot to a given position. However, they are not well suited to the problem of making a robot move through a desired trajectory at speeds close to the physical capabilities of the robot, and in many cases, they cannot be bypassed easily. In this paper, we propose an approach that models both the robot's joints and its built-in controllers as a single system that is in turn controlled by a neural network. The neural network controls the entire ...
This paper considers two approaches to the problem of vision and self-localization on a mobile robot. In the first approach, the perceptual processing is primarily bottom-up, with visual object recognition entirely preceding localization. In the second, significant top-down information is incorporated, with vision and localization being intertwined. That is, the processing of vision is highly dependent on the robot's estimate of its location. The two approaches are implemented and tested on a Sony Aibo ERS-7 robot, localizing as it ...
Despite efforts to design precise motor controllers, robot joints do not always move exactly as desired. This paper introduces a general model-based method for improving the accuracy of joint control. First, a model that predicts the effects of joint requests is built based on empirical data. Then this model is approximately inverted to determine the control requests that will most closely lead to the desired movements. We implement and validate this approach on a popular, commercially available robot, the Sony Aibo ERS-210A.
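The fit-then-invert idea in the abstract above can be illustrated with a minimal sketch: fit a forward model from requested joint positions to observed ones, then invert it to choose the request that best produces a desired movement. The linear model, function names, and toy data below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fit_forward_model(requests, observed):
    """Least-squares linear forward model: observed ~ a * request + b.

    (An assumed stand-in for the paper's empirically built model.)
    """
    a, b = np.polyfit(requests, observed, deg=1)
    return a, b

def invert_model(a, b, desired):
    """Solve a * request + b = desired for the request to send."""
    return (desired - b) / a

# Toy empirical data: a joint that consistently undershoots its request.
reqs = np.array([0.0, 0.5, 1.0, 1.5])
obs = 0.9 * reqs + 0.05
a, b = fit_forward_model(reqs, obs)

# To actually reach 1.0, request slightly more than 1.0.
request = invert_model(a, b, desired=1.0)
```

Sending `request` through the fitted forward model recovers the desired position, which is the sense in which the inverted model "most closely leads to the desired movements."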
Autonomous robots can use a variety of sensors, such as sonar, laser range finders, and bump sensors, to sense their environments. Visual information from an onboard camera can provide particularly rich sensor data. However, processing all the pixels in every image, even with simple operations, can be computationally taxing for robots equipped with cameras of reasonable resolution and frame rate. This paper presents a novel method for a legged robot equipped with a camera to use selective visual attention to efficiently ...
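The computational motivation above can be made concrete with a small sketch: rather than scanning every pixel, restrict processing to a window where an object is expected. The image representation and windowing scheme here are assumptions for illustration only, not the paper's actual attention mechanism.

```python
def count_target_pixels(image, is_target, window=None):
    """Count pixels matching a predicate, optionally restricted to a
    ((row_start, row_stop), (col_start, col_stop)) attention window."""
    rows = range(len(image)) if window is None else range(*window[0])
    count = 0
    for r in rows:
        cols = range(len(image[r])) if window is None else range(*window[1])
        count += sum(1 for c in cols if is_target(image[r][c]))
    return count

# Toy 4x4 "image" with a bright blob in the top-left quadrant.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]

full = count_target_pixels(img, lambda p: p == 1)        # examines 16 pixels
attended = count_target_pixels(img, lambda p: p == 1,
                               window=((0, 2), (0, 2)))  # examines only 4
```

Both calls find the same blob, but the attended scan touches a quarter of the pixels; at camera resolutions and frame rates, that kind of saving is what makes onboard vision tractable.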
This paper presents a technique for the Simultaneous Calibration of Action and Sensor Models (SCASM) on a mobile robot. While previous approaches to calibration make use of an independent source of feedback, SCASM is unsupervised, in that it does not receive any well-calibrated feedback about its location. Starting with only an inaccurate action model, it learns accurate relative action and sensor models. Furthermore, SCASM is fully autonomous, in that it operates with no human supervision. SCASM is fully ...
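The "relative models" phrasing above reflects a basic observability limit: if a sensor reading is the product of an action gain and a sensor gain, then without external feedback only the product is observable, so unsupervised calibration can recover the two models only up to a shared scale. The scalar setup and splitting convention below are illustrative assumptions, not SCASM's actual algorithm.

```python
def calibrate_relative(requests, readings):
    """Estimate the combined action-sensor gain from request/reading
    pairs, then split it by fixing the sensor scale to an arbitrary
    reference value (a relative, not absolute, calibration)."""
    gain = sum(rd / rq for rq, rd in zip(requests, readings)) / len(requests)
    sensor_scale = 1.0            # arbitrary reference
    action_scale = gain / sensor_scale
    return action_scale, sensor_scale

# Ground truth, unknown to the robot: reading = sensor * action * request.
true_action, true_sensor = 0.8, 1.5
reqs = [1.0, 2.0, 3.0]
reads = [true_sensor * true_action * r for r in reqs]

a_hat, s_hat = calibrate_relative(reqs, reads)
# The product a_hat * s_hat matches true_action * true_sensor exactly,
# even though the individual split is arbitrary.
```

The recovered pair is internally consistent: predictions made by composing the two estimated models agree with the robot's actual readings, which is all an unsupervised robot can verify.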
The UT Austin Villa Four-Legged Team for RoboCup 2003 was a new entry in the ongoing series of RoboCup legged league competitions. The team development began in mid-January of 2003, at which time none of the team members had any familiarity with the Aibos. Without using any RoboCup-related code from other teams, we entered a team in the American Open competition at the end of April, and met with some success at the annual RoboCup competition that took place in Padova, Italy at the beginning of July. In this paper, we describe aspects of (i) our development process and (ii ...
The UT Austin Villa Four-Legged Team for RoboCup 2004 was a second-time entry in the ongoing series of RoboCup legged league competitions. The team development began in mid-January of 2003 without any prior familiarity with the Aibos. After entering a fairly non-competitive team in RoboCup 2003, the team made several important advances. By the time of the July 2004 competition in Lisbon, Portugal, it was one of the top few teams. In this report, we describe both our development process and the technical details of its end ...
Papers by D. Stronger