Articles
RoboCup-2001
The Fifth Robotic Soccer
World Championships
Manuela Veloso, Tucker Balch, Peter Stone, Hiroaki Kitano,
Fuminori Yamasaki, Ken Endo, Minoru Asada, M. Jamzad,
B. S. Sadjad, V. S. Mirrokni, M. Kazemi, H. Chitsaz,
A. Heydarnoori, M. T. Hajiaghai, and E. Chiniforooshan
■ RoboCup-2001 was the Fifth International
RoboCup Competition and Conference. It was
held for the first time in the United States, following RoboCup-2000 in Melbourne, Australia;
RoboCup-99 in Stockholm; RoboCup-98 in Paris;
and RoboCup-97 in Osaka. This article discusses in
detail each one of the events at RoboCup-2001,
focusing on the competition leagues.
RoboCup-2001 was the Fifth International
RoboCup Competition and Conference
(figure 1). It was held for the first time in
the United States, following RoboCup-2000 in
Melbourne, Australia; RoboCup-99 in Stockholm; RoboCup-98 in Paris; and RoboCup-97
in Osaka. RoboCup is a research-oriented initiative that pioneered the field of multirobot research with teams of robots, starting in 1996. In
those days, most of the robotics research was
focused on single-robot issues. RoboCup
opened a new horizon for multirobot research:
Teams of robots need to face other teams of
robots to accomplish specific goals. This challenging objective offers a broad and rich set of
research and development questions, to wit the
construction of mechanically sound and robust
robots, real-time effective perception algorithms, and dynamic behavior-based approaches to support teamwork.
RoboCup has truly been a research-oriented
endeavor. Every year, the RoboCup researchers
analyze the progress of the research and
extend the competitions and demonstrations
in the different leagues to create new challenges. The ultimate goal of RoboCup is to
reach a point where teams of robots can successfully compete with human players. The
RoboCup events move toward this goal.
This article discusses in detail each one of
the events at RoboCup-2001, focusing on the
competition leagues. As an overview of the
complete RoboCup-2001 (table 1 lists all the
teams), and as an introduction to this article,
we first provide a short description of the
RoboCup-2001 events. The general chair of
RoboCup-2001 was Manuela Veloso. The associate chairs in charge of robotic and simulation
events, respectively, were Tucker Balch and
Peter Stone.
International symposium: This was a two-day international symposium with presentations of technical papers addressing AI and robotics research of relevance to RoboCup. Twenty papers and 42 posters on perception and multiagent behaviors were presented. The proceedings will be published by
Springer and are edited by program chairs
Andreas Birk, Silvia Coradeschi, and Satoshi
Tadokoro.
Figure 1. The RoboCup-2001 Participating People and Robots.
Two simulation leagues: These are the soccer simulator and the simulation rescue. The
soccer simulator competition consisted of
teams of 11 fully distributed software agents.
The framework consists of a server that simulates the game and changes the world according to the actions that the players want to execute. The RoboCup Simulation Rescue
competition, with teams of fully distributed
software agents, provided a disaster scenario in
which teams with different capabilities, for
example, firefighters, police crews, and medical
teams, needed to conduct search and rescue for
victims of a disaster. This event was held for
the first time at RoboCup-2001.
RoboCup junior outreach: The RoboCup
junior event hosts children 8 to 18 years of age
interested in robotic soccer. The competitions
and demonstrations include two-on-two soccer
and robot dancing.
Four robot leagues: These leagues are the
small-size robot, the middle-size robot, the
Sony legged robot, and the robot rescue. The
small-size robot competition consisted of
teams of as many as five robotic agents of
restricted dimensions, approximately 15 centimeters.3 Off-board vision and computer
remote control were allowed. The middle-size
robot competition consisted of teams of as
many as four robotic agents of restricted
dimensions and a surface of approximately 50 centimeters², in which robots needed to have
full on-board autonomy (table 4). The Sony
legged-robot league consisted of teams of three
fully autonomous Sony robots. Sixteen teams
participated with Sony four-legged robots. The
robot rescue competition was jointly held by
RoboCup and the American Association for
Artificial Intelligence (AAAI). It was held for
the first time as part of RoboCup, and it consisted of a three-story disaster scenario provided by the National Institute of Standards and
Technology (NIST), where robots navigate
through debris to search for victims.
Humanoid robot demonstration: RoboCup-2001, jointly with AAAI, held a demonstration of a humanoid robot. We are planning
the first humanoid game for RoboCup-2002.
RoboCup-2001 proved to be a truly significant
contribution to the fields of AI and robotics
and the subareas of multiagent and multirobot
systems.
Robotics Leagues
Robots competed in four leagues at RoboCup-2001: (1) the small-size league, (2) the middle-size league, (3) the Sony legged-robot league,
and (4) robot rescue. The small-size league,
chaired by Raul Rojas, involves teams of five
robots that play on a field about the size of a
table tennis court with an orange golf ball. The
Figure 2. Two Views of the RoboCup-2001 Middle-Size League.
robots are limited in size to at most 18 centimeters in diameter. One of the key distinctions between the small-size league and the
other leagues is that teams in the small-size
league are allowed to place cameras over the
field to determine the locations of robotic players and the ball. In most cases, teams feed the
output of the overhead camera into a central
computer that determines movement commands that are transmitted over wireless links
to the robots. However, many researchers are
interested in the challenge of developing
small-size robots with onboard sensing only;
the number of teams in this category has been
growing each year.
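As a concrete illustration of the off-board architecture just described, the following Python sketch shows one control cycle; the frame-grabbing, color-segmentation, and radio functions it takes as arguments (grab_overhead_frame, find_ball, find_robots, send_command) are hypothetical placeholders, not any team's actual interface.

import math

def control_step(grab_overhead_frame, find_ball, find_robots, send_command):
    """One cycle of the off-board small-size pipeline described above:
    overhead camera -> central computer -> wireless commands."""
    frame = grab_overhead_frame()          # image from the camera mounted over the field
    ball = find_ball(frame)                # (x, y) of the orange golf ball
    robots = find_robots(frame)            # {robot_id: (x, y, heading)} for our team

    for robot_id, (x, y, heading) in robots.items():
        # Simple proportional steering toward the ball (illustrative only).
        bearing = math.atan2(ball[1] - y, ball[0] - x)
        turn = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        speed = min(1.0, math.hypot(ball[0] - x, ball[1] - y) / 100.0)
        send_command(robot_id, turn=turn, speed=speed)   # transmitted over the radio link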
The small-size field has evolved substantially
in the last few years. Originally, the field was
defined as a ping-pong table surrounded by 10-centimeter-tall vertical walls. However, it was
felt that more “finesse” would be achieved in
ball handling if the walls were angled; so, in
2000 the walls were set at a 45-degree angle and
shortened to 5 centimeters, so that the ball is likely to roll out of bounds if it is not handled
carefully. Another evolution toward more realistic play was the addition of “artificial turf” on
the field (actually a short green carpet).
The year 2001 marked the first time that
more teams wanted to attend than could be
accommodated at the competition. Space and
time limited the organizers to approximately
20 teams in each league. Teams were required
to submit technical descriptions and videotapes of their teams to qualify. In the case of
the small-size league, 22 teams were invited,
and 20 eventually made the trip to Seattle.
The competition was conducted as follows:
Teams were divided into four groups of five
teams each. The composition of the groups
depended on a number of factors, including
past performance and country-continent of
origin. Within each group, a full round-robin
competition was held (each team played every
other team in the group). At the end of the
round-robin phase, the top two teams in each
group were allowed to proceed to the playoffs.
The small-size teams that reached the playoffs
were FU-FIGHTERS, LUCKY STAR II, KU-BOXES, ROGI
TEAM, CORNELL BIG RED, 5DPO, ROBOROOS, and the
FIELD RANGERS. Quarter finals, semifinals, and
finals were held in a single elimination tournament, with an additional match to determine
third place. The top finishers were (1) LUCKY
STAR II, (2) FIELD RANGERS, and (3) CORNELL BIG RED.
Middle-size–league teams play on carpeted
fields 5 meters wide by 9 meters long (figure 2).
The robots are limited to 50 centimeters in
diameter. Unlike the small-size league, no
external sensing is allowed, and all sensors
must be on board the robots themselves. Teams
are composed of as many as four robots. An
orange FIFA (Fédération Internationale de Football Association)
size-5 ball is used.
Eighteen teams participated in the middle-size league at RoboCup-2001, which was
chaired by Pedro Lima. Three groups of six
teams each competed in round-robin matches,
with the best eight teams proceeding to playoff
games. The top three finishers in the middle-size league were (1) CS FREIBURG, (2) TRACKIES, and
(3) EIGEN.
The Sony legged robots compete on a 3-meter by 5-meter carpeted field. Six colored
landmarks are placed around the field to help
the robots determine their location. A small
plastic orange ball is used for scoring. As in the middle-size league, the Sony legged robots are
limited to on-board sensing (including a color
camera). All teams must use identical robots
provided by Sony. In 2001, teams were composed of three robots each; in 2002, the teams
will include four robots. The Sony legged-robot
soccer league has been expanding each year to
include new teams. RoboCup-2001 included
16 teams from around the world.
The Sony legged league was chaired by
Masahiro Fujita. As in the other robot leagues,
the competition was conducted in round-robin
and playoff stages. For the round robin, teams
were organized into four groups of four teams.
Eight teams progressed to the playoffs. The top
three finishers in the Sony legged-robot league
were (1) UNSW UNITED’01, (2) CM-PACK’01, and (3)
UPENNALIZERS’01.
The year 2001 marked the first year
RoboCup included a robot rescue event (figure
3). The event was jointly organized by
RoboCup and AAAI and chaired by Holly Yanco. In this competition, robots explored a simulated post-earthquake environment for
trapped or injured human victims. Seven
teams participated in this event. No team did
well enough to place, but two technical awards
were given. Swarthmore College was given the
Technical Award for Artificial Intelligence for
Rescue, and Sharif University received the
Technical Award for Advanced Mobility for
Rescue. We expect this league to grow substantially in the next few years. The robot rescue
competition is described in more detail in the
companion articles in this issue.
Simulation Leagues
RoboCup-2001 featured the fifth RoboCup soccer simulation competition and introduced the
first RoboCup rescue simulation competition.
Both simulation platforms aim to capture
Figure 3. The RoboCup Rescue Robot League.
many of the challenges of the robotic leagues,
without requiring participants to build physical robots. As in the real world, simulator
agents must deal with large amounts of uncertainty and both perceptual and actuator noise.
Although the challenges of computer vision
and mechanical design are abstracted away,
simulator teams consist of greater numbers of
agents than do their robotic counterparts and,
thus, must address more large-scale multiagent
issues. The ability to execute many more test
runs in simulation than is possible with real
robots also enables a larger range of possible
approaches to agent control, including learning-based methods.
Soccer Simulation
The soccer simulator competition, chaired
this year by Gal Kaminka, continues to be the
most popular RoboCup event from the perspective of the number of entrants (figure 4).
More than 50 teams met the qualification
requirements, 42 of which actually entered
the competition. The RoboCup soccer simula-
tor (Noda et al. 1998) is an evolving research
platform that has been used as the basis for
successful international competitions and
research challenges (Kitano et al. 1997). It is a
fully distributed, multiagent domain with
both teammates and adversaries. There is hidden state, meaning that each agent has only a
partial world view at any given moment. The
agents also have noisy sensors and actuators,
meaning that they do not perceive the world
exactly as it is, nor can they affect the world
exactly as intended. In addition, the perception and action cycles are asynchronous, prohibiting the traditional AI paradigm of using
perceptual input to trigger actions. Communication opportunities are limited, and the
agents must make their decisions in real time.
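To make these characteristics concrete, here is a minimal client sketch in Python. It assumes the version-7 server conventions documented in the manual cited in note 1 (a UDP server, by default on port 6000, exchanging s-expression strings such as (init ...), (see ...), (dash ...), and (turn ...)); the ball-chasing heuristic is purely illustrative.

import socket

def run_client(team="Illustrative", host="localhost", port=6000, cycles=100):
    """Minimal sense-act loop: perception messages arrive asynchronously,
    while one action command may be sent each simulation cycle."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)                     # a sketch; no reconnection handling
    sock.sendto(f"(init {team} (version 7))".encode(), (host, port))
    _, server = sock.recvfrom(4096)          # the server replies from a player-specific port

    for _ in range(cycles):
        try:
            msg, _ = sock.recvfrom(4096)     # noisy, partial view of the world
        except socket.timeout:
            continue
        if msg.startswith(b"(see"):
            # A real agent would parse the objects here; we just chase the ball blindly.
            command = b"(dash 60)" if (b"(b)" in msg or b"(ball)" in msg) else b"(turn 30)"
            sock.sendto(command, server)
    sock.close()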
These domain characteristics combine to
make simulated robotic soccer a realistic and
challenging domain. Each year, small changes
are made to the simulator both to introduce
new research challenges and to “level the
playing field” for new teams. This year, the biggest changes were the introduction of heterogeneous players and a standardized coach language.
RoboCup-2001 Scientific Challenge Award
Energy-Efficient Walking for a Low-Cost
Humanoid Robot, PINO
Figure A. PINO.
Left: Whole view. Right: Mechanism.
The RoboCup humanoid league, which is
scheduled to start in 2002, is one of the
most attractive research targets. We believe
that the success of the humanoid league is
critical for the future of RoboCup and will
have major implications in robotics
research and industry. Building humanoid
robots that compete at RoboCup requires
sophistication in various aspects, including
mechanical design, control, and high-level
cognition.
PINO is a low-cost humanoid platform
composed of low-torque servomotors and
low-precision mechanical structures. It has
been developed as a humanoid platform
that can be widely used by RoboCup
researchers in the world. Figure A shows
the whole view and the mechanical architecture of PINO.
It is intentionally designed to have low-torque motors and low-precision mechanical structures because such motors and
mechanical structures significantly reduce
production cost. Although many humanoid robots use high-performance motor
systems to attain stable walking, such
motor systems tend to be expensive.
Motors that are affordable for many
researchers have only limited torque and
accuracy. Development of a method that
allows biped walking using low-cost components would have a major impact on the
research community as well as industry. In
the past, many researchers have studied a
simple planar walker without any control
torque (McGeer 1990). In such methods,
walking motions are determined by the relationship between gravitational potential effects and the structural parameters of the robot.
Thus, there is no control over walking
behaviors such as speed and dynamic
change in step size.
In the biped walking method, we started
with the hypothesis that the walker can
change the walking speed without changing the step length if the moment of inertia
of the swing leg at the hip joint is adequately changed. We designed a control method using the moment of inertia of the swing leg at the hip joint, applied it to the torso of the PINO model in computational simulations, and confirmed that the method enables stable walking with limited torque.
A cycle of biped walking can be subdivided into several phases: (1) two-leg supporting, (2) one-leg supporting, and (3)
landing. Both legs are grounded in the two-leg supporting phase and the landing phase,
whereas only one leg is grounded in the
one-leg supporting phase. In conventional
biped walking algorithms, knees are always
bent, so the motors are continuously under high load. This approach is very different
from normal human walking postures. It
should be noted that most of the current
control methods for humanoid walking are
designed independently of the structural
properties of the robot hardware. In general, these control methods require extremely large torque to realize desired walking
patterns. Although knees are bent when
walking on uneven terrain or major
weights are loaded, the legs are stretched
straight when walking on a flat floor. This
posture can easily be modeled by inverted
pendulum, which is known to be energy
efficient. In addition, movement of the torso affects the overall moment of inertia
and, thus, affects energy efficiency. Our
goal is to mimic human walking posture to
minimize energy through a combination of
an inverted pendulum controlled by a
swing leg and feedback control of torso
movement.
The basic idea behind the low-energy
walking method is to consider legs of
humanoid robots, during the one-leg supporting phase, as a combination of an
inverted pendulum model and a two-degree-of-freedom (DOF) pendulum model, assuming the structure of PINO to be a planar walker.
Heterogeneous Players
Heterogeneous players were introduced to the RoboCup simulator for the first time this year, in version 7.0 of the simulator.1
Figure B. Planar Four-Link Model of the Robot (links 1–4, with lengths l1–l4 and joint angles θ1–θ4).
In this case, the inverted
pendulum represents the supporting leg,
and the two DOF pendulum represents the
swing leg. The inverted pendulum model is
the most energy-efficient model of the supporting leg.
Figure B shows the four-link model with
torso. This model consists of link1, link2,
link3, and link4; link1 has a joint with the
ground. We define every joint angle θ1, θ2,
θ3, θ4 as an absolute angle of link1, link2,
link3, and link4, respectively. We assume
that every joint has a viscosity coefficient
of 0.01 [N ⋅ m ⋅ s/ rad] and that the knee
joint also has a knee stopper. Each link has
uniformly distributed mass m1, m2, m3, and
m4, respectively. Table A shows the link
parameters of the four-link model that are
obtained from the real PINO.
Table A. Link Parameters.
m1 = 0.718 kg    l1 = 0.2785 m
m2 = 0.274 kg    l2 = 0.1060 m
m3 = 0.444 kg    l3 = 0.1725 m
m4 = 3.100 kg    l4 = 0.4515 m
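Because the control method hinges on the moment of inertia of the swing leg about the hip, the following sketch shows how such quantities follow from table A, treating each link as a uniform thin rod rotating about one end (I = m*l²/3), consistent with the uniformly distributed mass stated above. It is an illustrative calculation, not the authors' model.

# Link masses [kg] and lengths [m] from table A (PINO four-link model).
links = {"link1": (0.718, 0.2785), "link2": (0.274, 0.1060),
         "link3": (0.444, 0.1725), "link4": (3.100, 0.4515)}

def rod_inertia_about_end(mass, length):
    # Uniform thin rod rotating about one end: I = m * l^2 / 3.
    return mass * length ** 2 / 3.0

for name, (m, l) in links.items():
    print(f"{name}: I = {rod_inertia_about_end(m, l):.4f} kg*m^2")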
Given the control method to verify
these hypotheses (Yamasaki et al. 2001),
parameter spaces were searched to identify
an optimal parameter set. Optimal solutions were found for three cases: (1) torso
movement is controlled by feedback from
body and leg movement, (2) torso is fixed
vertically, and (3) the three-link model
without torso is compared with the four-link model with torso.
In previous versions, teams
could consist of players with different behaviors, but their physical parameters, such as size,
Left: Figure C. Result of the Foot Gait of Case 1. Middle: Figure D. Result of the Foot Gait of Case 2.
Right: Figure E. Result of the Foot Gait of Case 3.
Figures C, D, and E show the foot trajectory for each case. Table B shows the initial angular velocities of θ1, θ2, θ3, and θ4; the time to touch down, t2; and the energy consumption. From
table B, t2 of the four-link model with torso
is longer than that of the three-link model
without torso, and the energy consumption
of case 1 is smaller than that of case 2,
although every angular speed is larger.
From these results, we can verify that the
walking motion with appropriate swings of
the torso enables the robot to walk with
lower energy consumption.
We chose the moment of inertia of the
swing leg at the hip joint, and we applied
feedback torque τleg = –kleg φ to the hip joint.
As a result, in the lower-limb model of PINO,
the maximum torque required was reduced
to the range of approximately 0.2 [N ⋅ m]
(at k = 0.13) to 0.35 [N ⋅ m] (at k = 0.22).
This enables the low-cost humanoid PINO to
perform reasonably stable biped walking.
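The following toy Python sketch makes the hip feedback law concrete: a single swing leg is modeled as a rigid rod pendulum about the hip, with τleg = –kleg φ and the 0.01 N·m·s/rad joint viscosity mentioned above. The mass, length, gain, and integration scheme are illustrative stand-ins, not the authors' simulation.

import math

def simulate_swing_leg(k_leg=0.2, m=1.0, l=0.28, dt=0.001, steps=500):
    """Swing leg as a uniform rod pendulum about the hip, with feedback torque
    tau = -k_leg * phi and joint viscosity b = 0.01 N*m*s/rad (toy model)."""
    inertia = m * l ** 2 / 3.0         # uniform rod about its end
    b = 0.01                           # viscosity coefficient from the text
    phi, phi_dot = 0.3, 0.0            # initial hip angle [rad] and angular velocity
    for _ in range(steps):
        tau = -k_leg * phi             # the feedback law applied to the hip joint
        # Gravity acts at the rod's center of mass (l/2) as the leg hangs below the hip.
        phi_ddot = (tau - b * phi_dot - m * 9.81 * (l / 2) * math.sin(phi)) / inertia
        phi_dot += phi_ddot * dt       # explicit Euler integration
        phi += phi_dot * dt
    return phi, phi_dot

print(simulate_swing_leg())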
Further, in the four-link model with torso, it was verified that appropriate swings
of the torso enable the robot to walk with
lower energy consumption, as low as 0.064
[J].
In this study, we observed the interesting relationship between the control parameters and the walking behaviors, but
understanding the details of the mechanism that realizes such behaviors is our
future work. This study demonstrates that
the energy efficiency of humanoid walking
can be altered when whole-body motion is appropriately used. This is an important insight toward achieving practical humanoid robots for low-cost production as well as high-end humanoids seeking ultrahigh performance through whole-body movement.

Table B. Results of Three Cases.
                                          Case 1    Case 2    Case 3
Initial angular velocity of θ1 [rad/sec]  1.736     0.962     3.374
Initial angular velocity of θ2 [rad/sec]  1.692     0.223     1.384
Initial angular velocity of θ3 [rad/sec]  0.000     0.000     0.000
Initial angular velocity of θ4 [rad/sec]  1.309     —         —
Time to touch down t2 [sec]               0.319     0.406     0.296
Energy consumption [J]                    0.064     0.109     0.025
Acknowledgments
The dynamic simulation was supported by
Masaki Ogino. The authors thank him and
members of the Asada Laboratory at Osaka
University.
OPENPINO
All technical information on PINO is now
available under GNU General Public
License and the GNU Free Documentation License as OPENPINO (the exterior design and trademarks are not subject to the GNU licenses). It is intended to be an entry-level research platform for possible collective efforts to further develop humanoid robots for additional research. The authors expect a LINUX-like community to build around OPENPINO.
speed, and stamina, were all identical. This
year, teams could choose from among players
with different physical characteristics. In particular, in any given game, each team was able
to select from identical pools of players, including the default player type from years past and
— Fuminori Yamasaki
Ken Endo
Minoru Asada
Hiroaki Kitano
six randomly generated players. At start-up,
teams were configured with all default players.
However, the autonomous online coach could
substitute in the randomly generated players
for any player other than the goalie. The only
restriction is that each random player type
Figure 4. The RoboCup-2001 Simulation Leagues.
could be assigned to, at most, three teammates.
The random players are generated by the
simulator at start-up time by adjusting five
parameters, each representing a trade-off in
player abilities: (1) maximum speed versus stamina recovery, (2) speed versus turning ability,
(3) acceleration versus size, (4) leg length versus kick accuracy, and (5) stamina versus maximum acceleration. These parameterizations
were chosen with the goal of creating interesting research issues regarding heterogeneous
teams without creating a large disadvantage for
teams that chose to use only default players. At
the outset, it was not known whether using
heterogeneous players would be advantageous.
Experimentation leading to the competition
established that using heterogeneous players
could provide an advantage of at least 1.4 goals
a game over using only the default players
(Stone 2002).
Indeed, at least one of the top-performing
teams in the competition (UvA Trilearn from
the University of Amsterdam—fourth place)
took good advantage of the heterogeneous
players, with some observers commenting on the speedy players they positioned on the sides of the field.
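A minimal sketch of the substitution rules described in this subsection follows, assuming a hypothetical list of desired types: the coach may give the six randomly generated types to any outfield player, each random type goes to at most three teammates, and the goalie keeps the default type.

from collections import Counter

def assign_player_types(preferences, n_players=11, max_per_random_type=3):
    """preferences: list of (player_number, desired_type) pairs, most important first.
    Type 0 is the default; types 1-6 are the randomly generated ones."""
    assignment = {p: 0 for p in range(1, n_players + 1)}   # everyone starts as default
    used = Counter()
    for player, ptype in preferences:
        if player == 1:                       # the goalie may not be substituted
            continue
        if ptype != 0 and used[ptype] >= max_per_random_type:
            continue                          # each random type limited to three teammates
        assignment[player] = ptype
        used[ptype] += 1
    return assignment

# Example: ask for the speedy type (say, type 2) for four players; only three succeed.
print(assign_player_types([(7, 2), (8, 2), (9, 2), (10, 2), (11, 5)]))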
Standardized Coach Language
Past RoboCup simulator competitions have allowed
teams to use an omniscient autonomous coach
agent. This coach is able to see the entire field
and communicate with players only when the
play is stopped (for example, after a goal or for
a free kick). Typically, each team developed its
own communication protocol between the
players and the coach.
This year, a standardized coach language was
introduced with the goal of allowing a coach
from one team to interact with the players
from another. The standardized language has a
specific syntax and intended semantics. Teams
had an incentive to use this language because
messages encoded in the standardized language could be communicated even when the
ball was in play (although with some delay and
frequency limit to prevent coaches from
“micromanaging” their players).
One offshoot of introducing a standardized
Figure 5. A RoboCup Junior Game.
language was that an auxiliary competition
could be introduced: a coach competition. In
this competition, entrants provided only a
coach that was paired with a previously
unknown team that was able to understand the
standardized coach language. Entrants were
judged based on how well this unknown team
could perform against a common opponent
when coached by the entrant’s coach program.
Results
For the second year in a row, a first-time entrant won the RoboCup simulator competition: TSINGHUAEOLUS from Tsinghua University in China. They beat the BRAINSTORMERS
from Karlsruhe University in Germany by a
score of 1–0, scoring the lone goal of the game
during the third overtime period. The winners
of the inaugural coach competition were the
ChaMeleons from Carnegie Mellon University
and Sharif-Arvand from Sharif University of
Technology in Iran.
Rescue Simulation
RoboCup-2001 hosted the inaugural RoboCup
rescue simulation competition, which was
chaired by Satoshi Tadokoro. The basis of
RoboCup rescue is a disaster rescue scenario in
which different types of rescue agents—firefighters, police workers, and ambulance workers—attempt to minimize the damage done to
civilians and buildings after an earthquake.
The setting was a portion of Kobe, Japan, the
site of a recent devastating earthquake.
The simulator included models that cause
buildings to collapse, streets to be blocked, fires
to spread, and traffic conditions to be affected
based on seismic intensity maps. Each participant had to create rescue agents for each of the
three types as well as one control center for each
type of agent (that is, a fire station, a police station, and a rescue center). The agents sense the
world imperfectly and must react to dynamically changing conditions by moving around the
world, rescuing agents, and putting out fires
according to their unique capabilities. Communication among agents of different types is
restricted to going through the control centers.
RoboCup-2001 Engineering Challenge Award
Fast Object Detection in Middle-Size RoboCup
Fast and reliable analysis of image data is
one of the key points in soccer robot performance. To make a soccer robot act fast
enough in a dynamically changing environment, we reduce the number of sensors as much as possible and design fast software for object detection and reliable decision making. Therefore, in RoboCup, we think it is worth getting fast and almost correct results rather than slow and exact ones.
To achieve this goal, we propose three
ideas: (1) a new color model, (2) object
detection by checking image color on a set
of jump points in the perspective view of
the robot front camera, and (3) a fast
method for detecting edge points on
straight lines. The other details of our robot
(that is, its mechanical design, hardware
control, and software) are given in Jamzad et
al. (2000).
A New Color Model
We propose a new color model named HSY
(the H is from CIELab, S from HSI, and Y
from the YIQ color model [Sangwine and
Horne 1998]). The reason for this selection
is that the component Y in YIQ converts a
color image into a monochrome one.
Therefore, compared with I in HSI, which is the average of R, G, and B, Y gives a better measure of pixel intensity.
The component S in HSI is a good measure
for color saturation. Finally, the parameter
H in CIELab is defined as follows:
H = tan⁻¹(b*/a*)
where a* denotes relative redness-greenness, and b* shows yellowness-blueness
(Sangwine and Horne 1998). H is a good
measure for detecting regions matching a
given color (Gong and Sakauchi 1995),
which is exactly the case in RoboCup
where we have large regions with a single
color.
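The sketch below spells out the HSY components as defined here: Y is the YIQ luma, S the HSI saturation, and H = tan⁻¹(b*/a*) from CIELab. The RGB-to-Lab step assumes sRGB primaries with a D65 white point and, for brevity, skips gamma correction; it illustrates the model's definition rather than the authors' lookup-table implementation.

import math

def rgb_to_hsy(r, g, b):
    """r, g, b in [0, 1]. Returns (H [rad], S, Y) as defined in the text:
    Y from YIQ, S from HSI, H = atan2(b*, a*) from CIELab."""
    # Y component of YIQ (luma).
    Y = 0.299 * r + 0.587 * g + 0.114 * b
    # S component of HSI.
    total = r + g + b
    S = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total
    # a*, b* of CIELab (linear-RGB approximation, sRGB matrix, D65 white point).
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Yxyz = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = f(X / 0.95047), f(Yxyz / 1.0), f(Z / 1.08883)
    a_star = 500.0 * (fx - fy)
    b_star = 200.0 * (fy - fz)
    H = math.atan2(b_star, a_star)
    return H, S, Y

print(rgb_to_hsy(0.9, 0.3, 0.1))   # a saturated orange-red, roughly the ball color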
Object Detection in
Perspective View
In the real world, we see everything in perspective: Objects far away from us are seen
small, and those closer up are seen larger.
This view is true for cameras as well.
Left: Figure A. Position of Jump Points in a Perspective View of the Robot.
Right: Figure B. An Illustration of Ball Segmentation by a Surrounding Rectangle.
Figure A shows an image of the RoboCup middle-size soccer field with certain points on it.
The points that are displayed in perspective
to the robot’s front camera are called jump
points. They have equal spacing on each
perspective horizontal line. Their vertical
spacing is related to the RoboCup soccer
field size. The actual spacing between jump
points is set in such a way that at least five
jump points are located on a bounding box
surrounding the ball (which is the smallest
object on the soccer field), no matter how
far or how close the ball is. By checking the
image color only at these jump points,
there is a high probability that we could
find the ball. In our system, we obtained
satisfactory results with 1200 jump points.
To search for the ball, we scan the jump
points from the lower right point toward
the upper left corner. At each jump point,
the HSY equivalent of the RGB values is
obtained from a lookup table. Because we
have defined a range of HSY for each of the
standard colors in RoboCup, we can easily
assign a color code to this HSY value. If a
jump point is red, then it is located on the
ball. Because this jump point can be any
point on the ball, from this jump point, we
can move toward right, left, up, and down,
checking each pixel for its color. As long as
the color of the pixel being checked is red,
we are within the ball area. This search
stops in each direction when we reach a
border point that is a nonred pixel. In one
scan of all jump points in a frame, in addition to a red ball, we can find all other
objects, such as robots, the yellow goal,
and the blue goal. For each object, we
return its descriptive data, such as color,
size, the coordinates of the lower-left and upper-right corners of its surrounding rectangle, and a point Q on the middle of its lower side (figure B). Point Q is used to find
the distance and angle of the robot from
this object.
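The jump-point scan and the four-direction growth around a red seed pixel can be sketched as follows. The image is assumed to be a two-dimensional array of already-classified color codes, so the perspective spacing and lookup-table details are omitted; the helper names are illustrative, not the team's code.

def find_ball(color, jump_points):
    """color[y][x] holds a color code ('red', 'green', ...); jump_points is a list of
    (x, y) positions scanned from the lower right toward the upper left."""
    h, w = len(color), len(color[0])
    for x, y in jump_points:
        if color[y][x] != "red":
            continue
        # Grow left/right/up/down from the seed while the pixels stay red.
        left = right = x
        top = bottom = y
        while left > 0 and color[y][left - 1] == "red":
            left -= 1
        while right < w - 1 and color[y][right + 1] == "red":
            right += 1
        while top > 0 and color[top - 1][x] == "red":
            top -= 1
        while bottom < h - 1 and color[bottom + 1][x] == "red":
            bottom += 1
        q = ((left + right) // 2, bottom)    # point Q: middle of the lower side
        return {"corners": ((left, bottom), (right, top)), "q": q,
                "size": (right - left + 1) * (bottom - top + 1)}
    return None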
Straight-Line Detection
During the match, there are many cases
when the robot needs to find its distance
from walls. In addition, the goalkeeper at
all times needs to know its distance from
walls and also from white straight lines in
front of the goal. Because the traditional
edge-detection methods (Gonzalez and
Woods 1993) are very time consuming in real-time situations, we propose a very simple and fast method to find the edge points
on straight lines as follows:
As seen in figure C, to locate points on
the border of the wall and the field, we
select a few points on top of the image
(these points are on the wall) and assume a
drop of water is released at each point. If no
object is in its way, the water will drop on
the field, right on the border with the wall.
To implement this idea, from a start point
wi we move downward until reaching a
green point fi. All candidate border edge
points are passed to the Hough transform (Gonzalez and Woods 1993) to find the straight-line equation that best fits these points.
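A sketch of the water-drop search: from each start column at the top of the image, walk down to the first green (field) pixel and fit a line to the collected border points. For brevity the sketch uses a least-squares fit where the authors pass the points to the Hough transform; a classified-color image is again assumed.

def wall_border_line(color, start_xs):
    """color[y][x] is a color-code array; start_xs are the columns of the points wi
    placed on the wall at the top of the image. Returns (a, c) for the line y = a*x + c."""
    points = []
    for x in start_xs:
        for y in range(len(color)):          # "drop of water" falling down column x
            if color[y][x] == "green":       # first field pixel: border point fi
                points.append((x, y))
                break
    if len(points) < 2:
        return None
    # Least-squares line fit (the authors use the Hough transform instead).
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None                          # vertical border; handle separately
    a = (n * sxy - sx * sy) / denom
    c = (sy - a * sx) / n
    return a, c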
Figure C. An Illustration of a Robot View.
Straight lines from wi to fi show the path of the water dropped from the top of the wall.
Conclusion
In a dynamically changing environment such as RoboCup, where most objects are moving around most of the time, we need near-real-time (25 frames a second) response from the robot vision system for fast decision making. Although the traditional methods of image processing for segmentation, edge detection, and object finding are very accurate, with the processing capabilities of today's PCs these methods are not able to respond at real-time speed. To overcome this processing-speed problem, we preferred to have a nonexact, but reliable, solution to the vision problem.
Fast object detection was achieved by checking the color of pixels at jump points and defining a rectangular bounding box around each detected object. To simplify the calculations, the distance and angle of an object from the robot are estimated to be those of this rectangle.
Although we obtained satisfactory results from our method in real soccer robot competitions, we believe the combination of a CCD camera in front and an omnidirectional viewing system on top of the robot can give a more reliable performance, especially for localization.
RoboCup rescue has many things in common with RoboCup soccer. It is a fully distributed, multiagent domain with hidden state,
noisy sensors and actuators, and limited communication opportunities. RoboCup rescue
introduces the challenges of scaling up to
many more agents and coordinating multiple
hierarchically organized teams.
In the competition, competing agents operate simultaneously in independent copies of
the world. That is, they don’t compete against
each other directly but, rather, compare their
performance under similar circumstances. The
scoring metric is such that human lives saved is
the most important measure, with injuries and
building damage serving to break ties.
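Read as a lexicographic rule, this metric can be sketched as follows; the field names and the strict tie breaking are a simplification for illustration, not the official scoring formula.

def rescue_rank_key(run):
    """run: dict with 'lives_saved', 'injuries', 'building_damage'.
    Lives saved matter most; injuries and building damage only break ties."""
    return (-run["lives_saved"], run["injuries"], run["building_damage"])

runs = [
    {"team": "A", "lives_saved": 40, "injuries": 12, "building_damage": 0.30},
    {"team": "B", "lives_saved": 40, "injuries": 9, "building_damage": 0.35},
    {"team": "C", "lives_saved": 38, "injuries": 5, "building_damage": 0.20},
]
for run in sorted(runs, key=rescue_rank_key):
    print(run["team"])   # B, A, C: more lives saved first, then fewer injuries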
Seven teams competed in the 2001 RoboCup
rescue competition. The winning team was
YABAI from the University of Electro-Communications in Japan.
RoboCup Junior
RoboCup junior (figure 5) aims to develop educational methods and materials using robotics
emanating from the RoboCup soccer theme.
Following on the success of activities held at
— M. Jamzad
B. S. Sadjad
V. S. Mirrokni
M. Kazemi
H. Chitsaz
A. Heydarnoori,
M. T. Hajiaghai
E. Chiniforooshan
RoboCup-2000, this year, RoboCup Junior
hosted 24 teams totaling nearly 100 participants from the local Washington area and other U.S. states as well as Australia, Germany, and
the United Kingdom.
RoboCup Junior 2001, chaired by Elizabeth
Sklar, included three challenges: (1) soccer, (2)
rescue, and (3) dance. These categories are
designed to introduce different areas within
the field of robotics, such as operation within
static versus dynamic environments, coordination in multiplayer scenarios, and planning
with incomplete information. The challenges
also emphasize both competitive and collaborative aspects for the teams, both within the
games and throughout preparation. In particular, the dance challenge allows students to
bring creativity in terms of art and music to an
event that traditionally focuses on engineering.
Extensive feedback and analysis have been gathered through interviews with students and mentors who have participated in RoboCup junior
events. This research by committee members
and collaborators is ongoing and involves evaluation of the effectiveness of educational team
robotics.
Figure 6. PINO, the Humanoid Robot.
Humanoid Exhibition
The humanoid exhibition, chaired by
Dominique Duhaut, had only one participant—the PINO team—and received major attention. PINO is a small-size (70 centimeters in
height) humanoid robot that can walk and follow the ball (figure 6). It was developed by the ERATO Kitano Symbiotic Systems Project, which is a five-year government-funded project in
Japan. A paper describing biped walking control
for PINO won this year’s Scientific Challenge
Award for the development of an energy-minimum walking method (see the sidebar). An interesting feature of PINO is that it was designed to be an open platform for humanoid research and only
uses low-cost off-the-shelf components.2
Conclusion
RoboCup-2001 continued the ongoing, growing research initiative that is RoboCup.
RoboCup-2002 will take place in June 2002 in
Fukuoka, Japan, and Pusan, South Korea.3
Acknowledgments
We thank the full organizing committee of RoboCup-2001.4 Special thanks to all the participating teams (see table 1), without whom
RoboCup would not exist. Thanks also to Elizabeth Sklar for input on the RoboCup junior
section.
Notes
1. Mao Chen, Ehsan Foroughi, Fredrik Heintz, Spiros
Kapetanakis, Kostas Kostiadis, Johan Kummeneje,
Itsuki Noda, Oliver Obst, Patrick Riley, Timo Steffens,
Yi Wang, and Xiang Yin. Users manual: RoboCup soccer server manual for SOCCER SERVER 7.07 and later.
Available at http://sourceforge.net/projects/sserver/.
2. All technical information on PINO is now available
under the GNU General Public License as the OPENPINO
platform PHR-001 (www.openpino.org/).
3. For more information, visit www.robocup.org.
4. See www.robocup.org/games/01Seattle/315.html as well as the RoboCup executive committee (www.cs.cmu.edu/~robocup2001/robocup-federation.html).
References
Gong, Y., and Sakauchi, M. 1995. Detection of
Regions Matching Specified Chromatic Features.
Computer Vision and Image Understanding 61(2):
163–269.
Soccer-Simulation League
11MONKEYS3, Keio University, Japan, Keisuke
Suzuki
ANDERLECHT, IRIDIA-ULB, Belgium, Luc Berrewaerts
AT HUMBOLDT 2001, Humboldt University Berlin,
Germany, Joscha Bach
A-TEAM, Tokyo Institute of Technology, Japan,
Hidehisa Akiyama,
ATTUNITED-2001, AT&T Labs-Research, USA,
Peter Stone
BLUTZLUCK, University of Leuven, Belgium, Josse
Colpaert
CHAMELEONS’01, Carnegie Mellon University,
USA, Paul Carpenter
CROCAROOS, University of Queensland, Australia,
Mark Venz
CYBEROOS2001, CSIRO, Australia, Mikhail
Prokopenko
DIRTY DOZEN, Institute for Semantic Information
Processing, Germany, Timo Steffens
DRWEB (POLYTECH), State Technical University,
Russia, Sergey Akhapkin
ESSEX WIZARDS, University of Essex, Huosheng Hu
FC PORTUGAL 2000, Universidades do Porto e
Aveiro, Portugal, Luis Paulo Reis
FC PORTUGAL 2001, Universidade de Aveiro, Portugal, Nuno Lau
FCTRIPLETTA, Keio University, Japan, Norihiro
Kawasaki
FUZZYFOO, Linkopings Universitet, Sweden,
Mikael Brannstrom
GEMINI, Tokyo Institute of Technology, Japan,
Masayuki Ohta
GIRONA VIE, University of Girona, Spain, Israel
Muñoz
HARMONY, Hokkaido University, Japan, Hidenori
Kawamura
HELLI-RESPINA 2001, Allameh Helli High School,
Iran, Ahmad Morshedian
JAPANESE INFRASTRUCTURE TEAM, Future University-Hakodate, Japan, Hitoshi Matsubara
KARLSRUHE BRAINSTORMERS, Universitaet Karlsruhe,
Germany, Martin Riedmiller
LAZARUS, Dalhousie University, Canada, Anthony
Yuen
LIVING SYSTEMS, Living Systems, Germany, Klaus
Dorer
LUCKY LUBECK, University of Lubeck, Germany,
Daniel Polani
MAINZ ROLLING BRAINS, Johannes Gutenberg University, Germany, Felix Flentge
NITSTONES, Nagoya Institute of Technology,
Japan, Nobuhiro Ito
OULU 2001, University of Oulu, Finland, Jarkko
Kemppainen
PASARGAD, AmirKabir University of Technology,
Iran, Ali Ajdari Rad
RMIT GOANNAS, RMIT, Australia, Dylan Mawhinney
ROBOLOG 2001, University of Koblenz, Germany,
Frieder Stolzenburg
SALOO, AIST/JST, Japan, Itsuki Noda
SBCE, Shahid Beheshti University, Iran, Eslam
Nazemi
SHARIF-ARVAND, Sharif University of Technology,
Iran, Jafar Habibi
TEAM SIMCANUCK, University of Alberta, Canada,
Marc Perron
TSINGHUAEOLUS, Tsinghua University, P. R. China,
Shi Li
TUT-GROOVE, Toyohashi University of Technology,
Japan, Watariuchi Satoki
UTUTD, University of Tehran, Iran, Amin Bagheri
UVA TRILEARN 2001, Universiteit van Amsterdam,
The Netherlands, Remco de Boer
VIRTUAL WERDER, University of Bremen, Germany,
Ubbo Visser
WAHOO WUNDERKIND FC,
University of Virginia,
USA, David Evans
WRIGHTEAGLE2001, University of Science and
Technology of China, P. R. China, Chen XiaoPing
YOWAI2001, University of Electro-Communications, Japan, Koji Nakayama
ZENG01, Fukui University, Japan, Takuya Morishita
Rescue-Simulation League
ARIAN, Sharif University of Technology, Iran, Jafar Habibi
GEMINI-R, Tokyo Institute of Technology, Japan,
Masayuki Ohta
JAISTR, Japan Advanced Institute of Science and
Technology, Japan, Shinoda Kosuke
NITRESCUE, Nagoya Institute of Technology,
Japan, Taku Sakushima
RESCUE-ISI-JAIST, University of Southern California,
USA, Ranjit Nair
RMIT ON FIRE, RMIT University, Australia, Lin
Padgham
YABAI, University of Electro-Communications,
Japan, Takeshi Morimoto
Small-Size Robot League
4 STOOGES, University of Auckland, New Zealand,
Jacky Baltes
5DPO, University of Porto, Portugal, Paulo Costa
CM-DRAGONS’01, Carnegie Mellon University, USA,
Brett Browning
CORNELL BIG RED, Cornell University, USA, Raffaello D’Andrea
FIELD RANGERS, Singapore Polytechnic, Singapore,
Hong Lian Sng
FU FIGHTERS, Freie Universität Berlin, Germany, Sven Behnke
FU-FIGHTERS-OMNI, Freie Universität Berlin, Germany, Raul Rojas
HWH-CATS, Hwa Hsia College of Technology and Commerce, Taiwan, R.O.C., Kuo-Yang Tu
KU-BOXES2001, Kinki University, Japan, Harukazu
Igarashi
LUCKY STAR II, Singapore Polytechnic, Singapore,
Ng Beng Kiat
OMNI, Osaka University, Japan, Yasuhiro Masutani
OWARIBITO, Chubu University, Japan, Tomoichi
Takahashi
ROBOROOS 2001, University of Queensland, Australia, Gordon Wyeth
ROBOSIX UPMC-CFA, University Pierre and Marie
Curie, France, Ryad Benosman
ROGI TEAM, University of Girona, Spain, Bianca
Innocenti Badano
ROOBOTS, The University of Melbourne, Australia,
Andrew Peel
SHARIF CESR, Sharif University of Technology, Iran,
Mohammad T. Manzuri
TEAM CANUCK, University of Alberta, Canada,
Hong Zhang
TPOTS, Temasek Engineering School, Singapore,
Nadir Ould Khessal
UW HUSKIES, University of Washington, USA, Dinh
Bowman
VIPERROOS, University of Queensland, Australia,
Mark Chang
Middle-Size Robot League
AGILO ROBOCUPPERS, Munich University of Technology, Germany, Michael Beetz
ARTISTI VENETI, Padua University, Italy, Enrico Pagello
CLOCKWORK ORANGE, Delft University of Technology, The Netherlands, Pieter Jonker
CMHAMMERHEADS’01, Carnegie Mellon University,
USA, Tucker Balch
COPS STUTTGART, University of Stuttgart, Germany, Reinhard Lafrenz
CS FREIBURG, University of Freiburg, Germany,
Bernhard Nebel
EIGEN, Keio University, Japan, Kazuo Yoshida
FUN2MAS, Politecnico di Milano, Italy, Andrea
Bonarini
FUSION, Fukuoka University, Japan, Matsuoka
Takeshi
GMD-ROBOTS, GMD-AIS, Germany, Ansgar Bredenfeld
ISOCROB, Instituto de Sistemas e Robótica, Portugal, Pedro Lima
JAYBOTS, Johns Hopkins University, USA, Darius
Burschka
MINHO, University of Minho, Portugal, António
Ribeiro
ROBOSIX, University Pierre and Marie Curie,
France, Ryad Benosman
SHARIF CE, Sharif University of Technology, Iran,
Mansour Jamzad
SPQR, University “La Sapienza,” Italy, Luca Iocchi
THE ULM SPARROWS, University of Ulm, Germany,
Gerhard Kraetzschmar
TRACKIES, Osaka University, Japan, Yasutake Takahashi
Sony Legged-Robot League
ARAIIBO, University of Tokyo, Japan, Tamio Arai
ASURA, Fukuoka Institute of Technology, Japan, Takushi Tanaka
BABYTIGERS 2001, Osaka University, Japan, Minoru
Asada
CERBERUS, Bogazici University, Turkey, Levent
Akin, and Technical Univ. of Sofia, Bulgaria,
Anton Topalov
CMPACK’01, Carnegie Mellon University, USA,
Manuela Veloso
ESSEX ROVERS, University of Essex, UK, Huosheng
Hu
GERMAN TEAM, Humboldt University Berlin, Germany, Hans-Dieter Burkhard
LES 3 MOUSQUETAIRES, LRP, France, Pierre Blazevic
MCGILL REDDOGS, McGill University, Canada,
Jeremy Cooperstock
ROBOMUTTS++, The University of Melbourne, Australia, Nick Barnes
SPQR-LEGGED, University “La Sapienza,” Italy,
Daniele Nardi
TEAM SWEDEN, Orebro University, Sweden, Alessandro Saffiotti
UNSW UNITED, University of New South Wales,
Australia, Claude Sammut
UPENNALIZERS, University of Pennsylvania, USA,
Jim Ostrowski
USTC WRIGHT EAGLE, University of Science and
Technology of China, P. R. China, Xiaoping
Chen
UW HUSKIES, University of Washington, USA,
Dieter Fox
Robot Rescue League
EDINBURGH,
University of Edinburgh, UK, Daniel
Farinha
SHARIF-CE, Sharif University of Technology, Iran,
Amir Jahangir
SWARTHMORE, Swarthmore College, USA, Gil Jones
UTAH, Utah State University, USA, Dan Stormont
MINNESOTA, University of Minnesota, USA, Paul
Rybski
FLORIDA, University of South Florida, USA, Robin
Murphy
Table 1. RoboCup 2001 Teams.
Gonzalez, R. C., and Woods, R. E. 1993. Digital Image
Processing. Reading, Mass.: Addison-Wesley.
Jamzad, M.; Foroughnassiraei, A.; Chiniforooshan,
E.; Ghorbani, R.; Kazemi, M.; Chitsaz, H.; Mobasser,
F.; and Sadjed, S. B. 2000. Middle-Sized Robots:
ARVAND. In RoboCup-99: Robot Soccer World Cup II,
74–84. Stockholm: Springer.
Kitano, H., and Asada, M. 1998. RoboCup Humanoid
Challenge: That’s One Small Step for a Robot, One
Giant Leap for Mankind. In Proceedings of the International Conference on Intelligent Robots and Systems. Washington, D.C.: IEEE Computer Society.
Kitano, H.; Tambe, M.; Stone, P.; Veloso, M.; Coradeschi, S.; Osawa, E.; Matsubara, H.; Noda, I.; and Asada, M. 1997. The RoboCup Synthetic Agent Challenge ‘97. In Proceedings of the Fifteenth
International Joint Conference on Artificial Intelligence, 24–29. Menlo Park, Calif.: International Joint
Conferences on Artificial Intelligence.
McGeer, T. 1990. Passive Dynamic Walking. International Journal of Robotics Research 9(2): 62–82.
Noda, I.; Matsubara, H.; Hiraki, K.; and Frank, I.
1998. SOCCER SERVER: A Tool for Research on Multiagent Systems. Applied Artificial Intelligence 12:233–
250.
Sangwine, S. J., and Horne, R. E. N. 1998. The Colour
Image-Processing Handbook. New York: Chapman and
Hall.
Stone, P. 2002. ATTUnited-2001: Using Heterogeneous Players. In RoboCup-2001: Robot Soccer World
Cup V, eds. A. Birk, S. Coradeschi, and S. Tadokoro.
Berlin: Springer Verlag. Forthcoming.
Yamasaki, F.; Endo, K.; Asada, M.; and Kitano, H.
2001. A Control Method for Humanoid Biped Walking with Limited Torque. Paper presented at the Fifth
International Workshop on RoboCup, 7–10 August,
Seattle, Washington.
Yamasaki, F.; Matsui, T.; Miyashita, T.; and Kitano, H.
2000. PINO, The Humanoid That Walks. In Proceedings of the First IEEE-RAS International Conference
on Humanoid Robots. Washington, D.C.: IEEE Computer Society.
Yamasaki, F.; Miyashita, T.; Matsui, T.; and Kitano, H.
2000. PINO, the Humanoid: A Basic Architecture.
Paper presented at the Fourth International Workshop on RoboCup, 31 August–1 September, Melbourne, Australia.
Manuela Veloso is associate professor of computer science at Carnegie
Mellon University (CMU). She
received her Ph.D. in computer science from CMU in 1992. She
received a B.S. in electrical engineering in 1980 and an M.Sc. in
electrical and computer engineering in 1984 from the Instituto
Superior Tecnico in Lisbon. Veloso’s long-term
research goal is the effective construction of teams of
intelligent agents where cognition, perception, and
action are combined to address planning, execution,
and learning tasks, in particular, in uncertain, dynamic, and adversarial environments. Veloso has developed teams of robotic soccer agents that have been
RoboCup world champions in three different leagues,
namely, simulation (1998, 1999), CMU-built small-wheeled robots (1997, 1998), and Sony four-legged
dog robots (1998). Veloso is vice president of the
RoboCup International Federation. She was awarded
a National Science Foundation Career Award in 1995
and the Allen Newell Medal for Excellence in Research
in 1997. Her e-mail address is veloso@cs.cmu.edu.
Tucker Balch is an assistant professor of computing at the Georgia
Institute of Technology. He has
been involved in RoboCup since
1997. Balch competed in robotic
and simulation leagues, chaired
the organization of the small-size
league in 2000, and served as associate chair for robotic events for
RoboCup-2001. Balch is also a trustee of the
RoboCup Federation. Balch’s research focuses on
coordination, communication, and sensing for multiagent systems. He has published over 60 technical
articles in AI and robotics. His book, Robot Teams,
edited with Lynne Parker, will be published in 2002.
His e-mail address is tucker@cc.gatech.edu.
Peter Stone is a senior technical
staff member in the Artificial Intelligence Principles Research Department at AT&T Labs Research.
He received his Ph.D. in 1998 and
his M.S. in 1995 from Carnegie
Mellon University, both in computer science. He received his B.S.
in mathematics from the University of Chicago in 1993. Stone’s research interests
include planning and machine learning, particularly
in multiagent systems. His doctoral thesis research
contributed a flexible multiagent team structure and
multiagent machine learning techniques for teams
operating in real-time noisy environments in the
presence of both teammates and adversaries. He is
currently continuing his investigation of multiagent
learning at AT&T Labs. His e-mail address is
pstone@research.att.com.
Hiroaki Kitano is a senior researcher at Sony Computer Science Laboratories, Inc.; director of
the ERATO Kitano Symbiotic Systems
Project, Japan Science and Technology Corporation, a government organization for basic science; and president of The
RoboCup Federation. Kitano was a
visiting researcher at Carnegie Mellon University
from 1988 to 1993 and received a Ph.D. in computer
science from Kyoto University in 1991. Kitano
received The Computers and Thought Award from
IJCAI in 1993 and the Prix Ars Electronica in 2000.
His e-mail address is kitano@symbio.jst.go.jp.