
Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics
Bangkok, Thailand, February 21 - 26, 2009

On the Design and Development of A Rough Terrain Robot for Rescue Missions
J. Suthakorn*, S.S.H. Shah, S. Jantarajit, W. Onprasert, W. Saensupo,
S. Saeung, S. Nakdhamabhorn, V. Sa-Ing, and S. Reaungamornrat
Center for Biomedical and Robotics Technology (www.bartlab.org)
Faculty of Engineering, Mahidol University, Salaya, Nakorn Pathom, Thailand
*Corresponding Address: Email egjst@mahidol.ac.th

Abstract - Rescue robots play an important role during rescue missions in disasters such as 9/11, which caused more than 2,000 deaths and thousands of injuries. However, tele-operated rescue robots are unable to perform their tasks constantly due to the limitations of current wireless communication technology. Therefore, rescue robots capable of performing their tasks autonomously during temporarily lost connections to the control base would be ideal. This paper introduces our development of a semi-autonomous rough terrain robot for rescue missions. The robot's hardware components, system architecture, and software architecture are described in order to provide a general overview of our robot. An alternative and comprehensive map-generating algorithm is presented and discussed. Finally, the experimental setup and results from a testing arena are reported.

Index Terms – rescue robot, rough terrain robot, semi-autonomous robot, mobile robot, path planning, SLAM

I. INTRODUCTION

A fully autonomous rescue robot with the capabilities of self-navigation, victim searching, and rescue-plan generation would be ideal for everyone involved in search and rescue operations.

A. The Statement of Problem

Disasters (both natural hazards and man-made catastrophes) have brought loss, grief, and starvation to survivors. An example of a deadly natural calamity was the Great Chilean Earthquake, which took place in Chile in May 1960; roughly 5,000 people died from both the earthquake and the resulting tsunamis. In May 2008, more than 50,000 Chinese citizens lost their lives, and over 20,000 people went missing as a result of a massive, devastating earthquake. Man-made disasters are also an important motivation, such as the grievous incident that happened in the USA on 11 September 2001 (9/11).

Most victims of 9/11 died due to the delay of assistance. In such conditions, the victims' locations and conditions were difficult for rescue crews to determine. Several researchers and academic staff, consequently, have paid more attention to conducting research to develop rough terrain robots, especially for rescue missions. Such rescue robots are able to perform their tasks in high-risk and dangerous places. They can supply images of the environment and report victims' locations to the robot operators at the control base outside the wrecked area.

B. Related Work

To develop a rescue robot, the key features worth considering are mobility and map generation. Sheh [1], for example, was interested in increasing the mobility of a robot: a toy "Tarantula" was modified into a rescue robot called "The Redback," which had improved mobility because of its small size and light weight. S. Thrun and his co-workers [2] introduced the FastSLAM algorithm (A Factored Solution to the Simultaneous Localization and Mapping Problem) to help increase accuracy in map generation.

Rescue robots sometimes work as a group. Vargas and his colleagues [3] constructed three cooperative mobile robots capable of detecting casualties in disasters. From another perspective, Birk and his colleagues [4] developed a rescue robot to examine collapsed buildings in city areas and find victims. Vincent and Trentini [5] introduced a robot which was able to understand obstacle shapes and climb over them rather than avoid them, by applying image segmentation and shape detection algorithms.

In contrast with manually controlled robots, autonomous rescue robots must decide their travel paths in many circumstances in order to reach victims and gather victim information. This requirement has led several researchers to develop intelligent terrain robots.

978-1-4244-2679-9/08/$25.00 ©2008 IEEE 1830


Birk and Kenn [6], for instance, have developed a semi-autonomous rescue robot to overcome the limitations of wireless communication that impede constant control over the robot. In addition, Pellenz and his team [7] have developed a rescue robot that performs in either autonomous or tele-operated mode.

This paper presents the design and development of a rough terrain robot, "Tehzeeb" (Fig. 1). The content is divided into three parts: System Descriptions, Map Generation, and Experimental Results.

Fig. 1 The Tehzeeb Rescue Robot

II. SYSTEM DESCRIPTIONS

This section presents the robot descriptions, which can be separated into three parts: 1) Hardware Components, 2) System Architecture, and 3) High-Level Software Architecture. The Tehzeeb rescue robot can be separated into four parts: front and rear arms, robot body, manipulators for carrying victim sensors and a camera, and the electronic compartment (Fig. 2).

Fig. 2 The electronic compartment

Tehzeeb is a 30-kilogram mobile robot with a length of 97 centimeters (when both front and rear arms are stretched), a width of 40 centimeters, and a height of 80 centimeters including its manipulator. The front arms are 40 centimeters long, while the rear arms are 27 centimeters in length. The radius of each arm is 270 centimeters. Tehzeeb is able to climb over a 40 degree slope (Fig. 3). The robot navigates using information from a laser scanner, creates maps, finds victims, and sends information about the locations of casualties to the station via wireless communication (IEEE 802.11a).

Fig. 3 The Tehzeeb on a 45 degree testing slope

A. Hardware Components

Fig. 4 Hardware Components

The Tehzeeb robot is equipped with several sensors, i.e. USB cameras, a heat sensor, a carbon dioxide detector, and a microphone. These sensors detect information about the victims; all of them are located on an onboard manipulator. Fig. 4 illustrates the hardware components of the robot. Various other sensors are responsible for tracking the robot's position and orientation, i.e. an accelerometer, a compass, and a laser scanner. The scanner sits on a level-stabilizer so that it accurately measures the distances between the robot and its environment. The distance data is used to generate the surrounding map (details are discussed in the following subsection). The battery compartment is located at the center of the robot for better stability. A USB camera is used to capture rear images, so the operator can see through the camera whether the back arms get stuck in the wreckage. An encoder is employed to gain robot travel data, which is used in cooperation with the laser scanning data to generate a higher-accuracy map.

B. System Architecture

This subsection describes the system architecture of the Tehzeeb robot (Fig. 5). A PIC micro-controller is used as a low-level controller to control and interact with the manipulator, the heat sensors, and the carbon dioxide detectors. The PIC micro-controller, in turn, communicates with the master controller, where an ARM7 controller is used. The ARM7 master controller is also responsible for controlling and interacting with several devices, such as the front and rear arms, the robot driving system, the accelerometer, the electronic compass, and the encoders. The master controller then connects to an on-board laptop via a USB port.

Fig. 5 The robot's System Architecture

The laptop takes care of processing the data gained from the devices. The communication between the robot and the control base is done through the laptop's wireless system. Table I lists the hardware devices used in the robotic system.

TABLE I
HARDWARE DEVICES

Components               Brand and Version
Servo Motor              GWS S666/STD
DC Motor                 Tormax
Microcontroller          PIC 18F2331; ARM 7 (LPC 2103)
Encoder                  Yaskawa model 200 ASKS 5VM
Compass                  ADX-CMPS03
Accelerometer            Parallax Memsic 2125
Motor                    ZGB70-60SRZ-1
Heat Sensor              Thermopile Sensor SMTIR 9902S1L
Carbon dioxide Detector  TGS 4161
Laser Range Finder       Hokuyo URG-04LX
USB Cameras              Logitech QuickCam Pro 9000
Notebook                 Compaq

C. High-Level Software Architecture

Fig. 6 illustrates the robot's software architecture. The software consists of seven packages: Hokuyo, Map, Compass, Front Camera, Sensors, Robot State Information, and Window Form Application. The main module that controls the other classes is the Window Form Application module. This module communicates with the Map module to obtain data to draw a map, and it also retrieves and keeps the angles that the robot rotates from the Compass module. Besides this, the front camera's position and direction are controlled by the Form module via the Camera module. After gaining all the necessary information (data from all sensors, measurements from the compass, and map data), the Window Form Application module displays all the information it attains in its display window.

Fig. 6 The robot's Software Architecture

Secondly, the Map module is responsible for communicating with the Hokuyo module (a laser range finder), which is an interface to send commands to and receive responses from the Hokuyo laser scanner. Raw data obtained from the Hokuyo class is filtered and processed by this class to obtain the environment described in world coordinates before being transferred to the main module and displayed.

The Compass module and the Sensors class have the same duty, which is to connect to their hardware and retrieve data to display on screen. However, the Camera module is not responsible for displaying a video stream, as many people might assume from its name; it only receives orders from the main module to control the position of the camera. For displaying the stream, we apply separate software and run it concurrently, because this reduces the processing time of our program. Finally, instead of sending data via wireless communication and processing it at the remote station, we decided to process all data on the on-board computer and apply a remote desktop to show the laptop screen on the monitor of the station computer.
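The module layout described above can be sketched as follows. This is a minimal illustration, not the paper's actual code (which is a Windows Forms application, presumably C#): class and method names such as CompassModule and read_ranges are our own assumptions, and the scanner is stubbed with fixed values.

```python
# Illustrative sketch of the described architecture: a main "Window Form
# Application" module that owns per-device modules and aggregates their data.
# All names and values here are hypothetical, chosen to mirror the text.

class HokuyoModule:
    """Stand-in for the laser range finder interface."""
    def read_ranges(self):
        return [1.2, 0.0, 3.4]   # stub: raw laser distances in metres


class MapModule:
    """Filters raw Hokuyo data before handing it to the main module."""
    def __init__(self, scanner):
        self.scanner = scanner

    def build_map(self):
        raw = self.scanner.read_ranges()
        return [r for r in raw if r > 0]   # drop invalid (zero) returns


class CompassModule:
    """Keeps the angle the robot has rotated."""
    def __init__(self):
        self._heading = 0.0

    def read_heading(self):
        return self._heading


class WindowFormApplication:
    """Main module: polls the other modules and gathers their data
    for display, as the text describes."""
    def __init__(self):
        self.compass = CompassModule()
        self.map = MapModule(HokuyoModule())

    def refresh(self):
        return {"heading": self.compass.read_heading(),
                "map_points": self.map.build_map()}
```

The design choice the text motivates (processing on-board and exporting only the rendered screen via remote desktop) keeps all of these modules on the robot's laptop rather than at the station.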

III. MAP GENERATION

The information about distances and angles to obstacles obtained from the Hokuyo laser scanner is crucial for constructing maps. The strategy that we apply is alternative and comprehensive; it comprises an algorithm to select and recognize landmarks from each scan, a procedure to match landmarks to the previously collected ones, and a method for the robot to localize itself while exploring an unseen region.

A. Method and Algorithm

We utilize a simple and straightforward method to select landmarks from each scan: we define landmarks as points which possess significant and outstanding features exposed in graphs plotted in polar coordinates. However, we have to filter the data before applying this procedure because of the vibration of the scanner.

1) Landmark Location Evaluation

Noticeable and distinct features are used to determine landmarks and to match them to the existing ones (landmarks from the previous scan). These features are exposed by plotting a graph in polar coordinates; the graphs will be distinct according to the real environment.

The first feature is a relative maximum distance; we can find such points in the distance-step graph. This feature represents a corner, or indicates that there will be free space if, after that peak, the graph drops to zero. (Fig. 7 illustrates the robot travelling through a three-way junction. Fig. 8 and Fig. 9 present graphs plotted in the Polar and Cartesian coordinates to represent landmarks gained from this relative maximum distance feature.)

The second feature is obtained from a graph of steps against the derivative of distance with respect to step, multiplied by two:

    x = step,  y = distance
    f(x) = 2 (dy/dx)

For this feature, a landmark is a point that has a relative maximum of this doubled derivative value. This feature represents corners of real-world junctions: if there are two corners around a junction, the laser scanner will find two points that have relative maxima of the doubled derivative value. (Fig. 10 illustrates the robot travelling through a three-way junction in a different direction from Fig. 7. Fig. 11 and Fig. 12 depict graphs plotted in the Polar and Cartesian coordinates to represent landmarks gained from this relative maximum twice-derivative value feature.)

Fig. 7 The robot moves straight forward through the three-way junction

Fig. 8 A graph plotted in the Polar coordinate representing landmarks

Fig. 9 A graph plotted in the Cartesian coordinate

Fig. 10 The robot moves straight forward through the three-way junction
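The two features above can be sketched in a few lines of code. This is a hedged illustration, assuming the scan arrives as a list of distances indexed by step; the smoothing window and the helper names (moving_average, landmark_steps) are our own assumptions, not the paper's implementation.

```python
# Sketch of the two landmark features on a filtered distance-step scan.
# Feature 1: a relative maximum of distance (a corner, or free space beyond).
# Feature 2: a relative maximum of f(x) = 2*dy/dx, the doubled derivative.

def moving_average(values, window=3):
    """Simple smoothing filter to damp scanner vibration before feature search."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def landmark_steps(distances):
    """Return the step indices flagged by either feature."""
    d = moving_average(distances)
    # doubled forward-difference derivative of distance with respect to step
    f = [2 * (d[i + 1] - d[i]) for i in range(len(d) - 1)]
    marks = set()
    for i in range(1, len(d) - 1):
        if d[i] > d[i - 1] and d[i] > d[i + 1]:   # relative maximum distance
            marks.add(i)
    for i in range(1, len(f) - 1):
        if f[i] > f[i - 1] and f[i] > f[i + 1]:   # relative maximum 2*dy/dx
            marks.add(i)
    return sorted(marks)
```

For example, a scan such as [1, 1, 2, 5, 2, 1, 1] produces one distance-peak landmark and one doubled-derivative landmark, matching the intuition that a junction exposes both kinds of feature.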

Fig. 11 Graphs of distances and steps plotted in the Polar coordinate, and graphs of double derivatives and steps displaying landmarks at their peaks

Fig. 12 Graphs of distances and steps plotted in the Cartesian coordinate, displaying landmarks at their corners

These two features are applied to select landmarks from each scan and to match landmarks from the previous scan to the current one. We first check whether the sequences of landmark features obtained from both rounds are the same; if so, we then check whether the gap between the distance at the previously-scanned step and that at the currently-scanned step is acceptable. If it is not too large, those points are located at the same positions in the world coordinate.

After we obtain matched-landmark pairs, we apply them to figure out the translation and the rotation of the robot by utilizing geometry and vector properties; the translation vector is then used to calculate the angle of rotation by solving the transformation equation which converts each currently-scanned landmark to its matching previously-scanned one. With this information (the position vector and the angle of rotation), we can form a homogeneous transformation matrix which is used to convert the local description of the environment to the world coordinate. Ultimately, with the environment described in the world coordinate, we can complete plotting a map.

IV. EXPERIMENTAL RESULTS

Our robot was demonstrated, and we observed its performance from five perspectives while participating in a robot competition, the "Thailand Rescue Robot Championship 2007" (the arena for this competition is presented in Fig. 13). The robust mechanical structure and mobility of the robot were examined by letting it travel along various paths: a rough and dangerous step field, which put the robot at risk of breaking down (as illustrated in Fig. 14), and a smoother region. While the robot was surveying, the environmental information was used to determine paths and to search for victims, so the cameras and sensors were tested. The software for robot control, robot status display, and map generation was monitored. The experiment indicated that the mechanical structure of our robot was not robust enough in the severely uneven environment, so the material and design needed to be changed and improved. The software for map generation was also incomplete and needed an additional module to cope with error accumulation.

Fig. 13 The arena for the RoboCup Rescue Robot League

Fig. 14 The robot on a random step field
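The localization step of Section III (recovering the robot's rotation and translation from matched landmark pairs and building a homogeneous transformation) might be sketched as follows. This is our simplification under stated assumptions: the paper solves a transformation equation over its matched pairs, while here a closed form from just two matched landmarks is used, and the function names are hypothetical.

```python
import math

def rigid_transform(prev_pts, curr_pts):
    """Estimate the 2-D rotation angle and translation that map two
    currently-scanned landmarks onto their previously-scanned matches,
    then build the 3x3 homogeneous matrix that re-expresses the local
    scan in world coordinates."""
    (x1, y1), (x2, y2) = prev_pts
    (u1, v1), (u2, v2) = curr_pts
    # rotation: difference of the segment directions between the two pairs
    theta = math.atan2(y2 - y1, x2 - x1) - math.atan2(v2 - v1, u2 - u1)
    c, s = math.cos(theta), math.sin(theta)
    # translation: chosen so the first current landmark lands on its match
    tx = x1 - (c * u1 - s * v1)
    ty = y1 - (s * u1 + c * v1)
    T = [[c, -s, tx],
         [s,  c, ty],
         [0,  0, 1]]          # homogeneous transform: world = T @ local
    return theta, T

def apply_transform(T, p):
    """Map a local-scan point into the world coordinate."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])
```

Applying `apply_transform` to every filtered scan point with the current `T` is what lets consecutive local scans be stitched into one world-coordinate map.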

V. DISCUSSION

To be a complete mobile rough terrain robot, our robot has to be modified in two major parts. Firstly, its mechanical design and implementation have to be adjusted to make it more robust in difficult environments. Secondly, the software responsible for map generation and path planning has to be improved with additional procedures to make the robot truly autonomous.

REFERENCES

[1] R. Sheh, "The Redback: A Low-Cost Advanced Mobility Robot," The University of New South Wales, Sydney, Australia.

[2] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem," Proceedings of the AAAI National Conference on Artificial Intelligence, 2002.

[3] A. E. M. Vargas, K. Mizuuchi, D. Endo, E. Rohmer, K. Nagatani, and K. Yoshida, "Development of a Networked Robotic System for Disaster Mitigation – Navigation System Based on 3D Geometry Acquisition," Tohoku University, Japan.

[4] A. Birk, H. Kenn, S. Carpin, and M. Pfingsthorn, "Toward Autonomous Rescue Robots," First International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, July 5, 2003.

[5] I. Vincent and M. Trentini, "Shape-shifting Tracked Robotic Vehicle for Complex Terrain Navigation," Defence R&D Canada, Technical Memorandum, DRDC Suffield TM 2007-190, December 2007.

[6] A. Birk and H. Kenn, "A Rescue Robot Control Architecture Ensuring Safe Semi-Autonomous Operation," RoboCup-02: Robot Soccer World Cup VI, Springer, February 2004.

[7] J. Pellenz, "RoboCup 2008 – RoboCupRescue Team resko@UniKoblenz," University of Koblenz-Landau, Germany, 2008.

