EVALUATION OF HAPTIC FEEDBACK METHODS FOR
TELEOPERATED EXPLOSIVE ORDNANCE DISPOSAL
ROBOTS
by
Alex J. Burtness
An essay submitted to The Johns Hopkins University in conformity with the
requirements for the degree of Master of Science.
Baltimore, Maryland
January, 2011
© Alex J. Burtness 2011
All rights reserved
Abstract
This thesis reports on the effects of sensory substitution methods for force feedback
during teleoperation of robotic systems used for Explosive Ordnance Disposal (EOD).
Existing EOD robotic systems do not feature any type of haptic feedback. It is
currently unknown what benefits could be gained by supplying this information to
the operator. In order to assess the benefits of additional feedback, a robotic gripper
was procured and instrumented in order to display the forces applied by the end
effector to an object. In a contact-based event detection task, users were asked to
slowly grasp an object as lightly as possible and stop when a grasp was achieved.
The users were supplied with video feedback of the gripper and either (1) no haptic
feedback, (2) surrogate visual feedback, or (3) surrogate vibrotactile feedback. The
force information came exclusively from the current being used to drive the gripper.
Peak grasp forces were measured and compared across conditions. The improvements gained from vibrotactile feedback over no haptic feedback were statistically
significant and reduced the threshold at which event detection took place from an
average of 8.43 N to an average of 5.97 N. Qualitative information from the users
showed a significant preference for this type of feedback. Vibrotactile feedback was
shown to be very useful, while surrogate visual force feedback was not found to be
helpful quantitatively nor was it preferred by the users. This feedback information
would be inexpensive to implement and could be easily added to existing systems,
thereby improving the capabilities available to the EOD technician.
Primary Reader: Professor Allison Okamura
Secondary Reader: Dr. Matthew Kozlowski
Acknowledgments
I would first like to thank my advisors, Professor Allison Okamura, Dr. Matthew
Kozlowski, and Stuart Harshbarger for their support, patience, and guidance through
this process.
I would like to thank the engineers and technicians at Naval Explosive Ordnance
Disposal Technology Division (NAVEODTECHDIV). Specifically at TECHDIV, I
would like to thank Dr. Kurt Hacker who made the initial arrangements for me to
begin this work. Additionally, I would like to thank Byron Brezina for providing valuable information on the development of robotic systems for EOD, and Rob Simmons
for information on underwater EOD technologies.
Thanks also to Stephen Phillips for providing resources and contacts, Robert
Armiger for supplying access to the Revolutionizing Prosthetics code repository, and
John Bartusek for helping to set up the HD-2 arm.
I would also like to recognize the US Naval Academy class of 2010 EOD Officers
for their willingness to serve their country in such an important way during a time
of war. I’m finally finished boys. I’ll see you in Panama City. Hooyah to LT Eric
Jewell for helping to guide America’s next generation of EOD officers.
I would like to thank all the members of the haptics lab who made it such a
welcoming place. Special thanks to Amy Blank for helping with statistics. Thanks
also to Joe and Kamini for helping to fight the good fight as robotics master’s students,
and my roommates and lifelong friends Justin Kramer, Eric Wittig, and Mike Head
for never once accusing me of “slacking off at Hopkins all day.”
I’d like to thank Jessica for her ever present support through three years at the
Academy, many late nights of graduate school, and the long separation that we have
ahead of us. Finally, I would like to thank my family for always nurturing my curiosity,
and putting up with all of the outlandish behavior and experiments that came with
it.
However, I save my highest acknowledgments for the men and women in our armed
forces who deploy overseas with the mission of rendering safe ordnance and improvised
explosive devices. These brave young men and women risk their lives every day in
order to protect our soldiers, sailors, marines, and airmen. They deserve nothing but
the best research and technology from academia and industry.
Contents

Abstract
Acknowledgments
List of Tables
List of Figures

1 Introduction
  1.1 Motivation
  1.2 Prior Work
    1.2.1 Prior Work in Haptic Feedback
    1.2.2 Prior Work in Explosive Ordnance Disposal Robotics
  1.3 Thesis Contributions
  1.4 Organization

2 Experimental System
  2.1 Overview
  2.2 Input Device
    2.2.1 Logitech Dual Action Gamepad
  2.3 Manipulator
    2.3.1 Three-Jaw Gripper
    2.3.2 Robotic Arm
  2.4 Sensors
    2.4.1 Accelerometer
  2.5 Feedback Devices
    2.5.1 Vibrotactor
    2.5.2 Graphical Feedback System
  2.6 System Integration
    2.6.1 Microcontroller
    2.6.2 Digital Video Camera
    2.6.3 Force/Torque Sensor
    2.6.4 Framework and Setup

3 Experiment
  3.1 Preliminary Experiments
    3.1.1 Accelerometer Test
    3.1.2 F/T Sensor and Current Sensor Test
    3.1.3 Control Methodology
  3.2 Methods
    3.2.1 Procedure
  3.3 Results

4 Conclusions
  4.1 Contributions
  4.2 Future Work
    4.2.1 Additional Experiments
      4.2.1.1 Sustained Force Experiment
      4.2.1.2 Real-World Task
    4.2.2 Further Areas

Bibliography

A Code
  A.1 HapGui.m
  A.2 PositionStep.m
  A.3 PositionRamp.m
  A.4 VelocityControl.m
  A.5 Forcebar.m
  A.6 CalibrationRun.m
  A.7 Arduino Code - SerialReadWrite.pde

B Data Sheets
  B.1 VPM2 Vibrotactor
  B.2 Arduino Duemilanove
  B.3 Kistler Piezotron Accelerometer
  B.4 Smooth-On OOMOO Silicon Rubber

Vita
List of Tables

2.1 Three-Jaw Gripper torque/velocity polynomial values for θ ∈ {0-200}
2.2 Three-Jaw Gripper torque/velocity polynomial values for θ ∈ {200-400}
2.3 Three-Jaw Gripper - torque/velocity identification raw data
2.4 Three-Jaw Gripper - torque/velocity identification filtered data with polynomial fit curves
2.5 Sensing range and resolution of forces for the ATI Mini45
2.6 Sensing range and resolution of torques for the ATI Mini45
3.1 Average applied force from each user in Newtons
3.2 Table of statistical significance. (1) No feedback, (2) Surrogate Visual Feedback, (3) Surrogate Vibrotactile Feedback
3.3 Post-experiment survey average results. (1) - Very Easy, (2) - Easy, (3) - Moderate, (4) - Hard, (5) - Very Hard
List of Figures

1.1 Foster-Miller TALON Robot
1.2 Results from police survey regarding the ideal cost of robotic systems for bomb disposal, reproduced from [1]
1.3 EOD Teleoperator System
1.4 Mark I Wheelbarrow (1972)
1.5 Mark II Wheelbarrow (1972)
1.6 Mark III Wheelbarrow (1972)
1.7 Mark V Wheelbarrow (1973)
1.8 Mark VI Wheelbarrow (1975)
1.9 UK MoD Buckeye
1.10 Wheelbarrow OCU
1.11 Mark VII Wheelbarrow
1.12 Remotec Mark VIII Wheelbarrow (1997)
1.13 Mark IX Wheelbarrow
1.14 Remotely Operated Vehicle for Emplacement and Reconnaissance
1.15 Remotely Actuated Mobile Platform for Render Safe and Disposal
1.16 Remote Control EOD Tool and Equipment Transporter
1.17 Semi-Autonomous Mobile System for Ordnance Neutralization
1.18 Remote Ordnance Neutralization System
1.19 iRobot PackBot
1.20 Hydroid Remus
2.1 Basic teleoperation control loop
2.2 Logitech Dual Action Gamepad
2.3 Contineo Robotics Three-Jaw Gripper
2.4 Mapping of motor encoder counts to gripper position [25, 75, 125, 175, 225, 275, 325, 375]
2.5 Graph displaying the prevention of extrapolation error by adding a horizontal asymptote
2.6 Northrop Grumman HD-2 Manipulator
2.7 Mounted Kistler Accelerometer
2.8 Mounted Vibrotactor
2.9 Graphical Feedback System
2.10 Arduino Duemilanove
2.11 Logitech Quickcam
2.12 ATI Mini45 Force/Torque Sensor
2.13 Grasping object instrumented with the ATI Mini45 F/T Sensor - Pen for scale
2.14 System Framework
3.1 Accelerometer test output showing three separate grasps, noted in red, of the instrumented object
3.2 F/T output during four successive grasps of the instrumented object
3.3 Corresponding GUI output during four successive grasps of the instrumented object
3.4 The setup of the gripper and instrumented object during the experiment
3.5 Plot of mean peak forces applied to the instrumented object, averaged for all subjects and all trials for each condition
3.6 Post-experiment survey ratings
Dedicated to our Nation’s fallen EOD warriors
Chapter 1
Introduction
1.1
Motivation
Since the start of the Global War on Terror in 2001, 5,777 United States service
members have been killed in overseas operations. Another 41,030 have been wounded
in action [2]. It has been estimated that roadside bombs, Improvised Explosive Devices (IEDs), and suicide car bombs have accounted for 50% of the casualties in
Afghanistan and 60% in Iraq [3].
In addition to the threat faced by those in the military, over 100 million land
mines are currently planted around the world, causing between 15,000 and 20,000 civilian casualties per annum in addition to the countless injuries caused by unexploded
ordnance (UXO) [4]. Prior to 2001, there were over 1,000 casualties annually in
Afghanistan alone, making it the country with the highest fatality rate due to land
Figure 1.1: Foster-Miller TALON Robot
mines and UXO. The overwhelming majority of those casualties were civilians [4].
Explosive threats pose a serious danger to both militaries and civilian populations
who live and work in areas where land mines and UXO are abundant. These threats
are dealt with by civilian bomb disposal units and military EOD units. These units
have used robotic systems since the 1970s in order to render safe explosive threats from
a distance, saving countless lives. In the U.S. Navy alone, 200 Man Transportable
Robot Systems, shown in Figure 1.1 from [5], have been destroyed since 2001, each
an instance where a technician might otherwise have been injured [6]. However,
these systems are fairly rudimentary when compared to some of the high-performance
teleoperation systems used in other applications such as minimally invasive surgery,
maintenance in space, and hazardous material handling.
The systems currently in use tend to command robots in joint space using velocity
control toggle switches. Due to reliability and computational constraints, no currently
fielded EOD robots use Cartesian or master-slave control. Visual feedback is given
to the user on the Operator Control Unit (OCU) from the onboard camera. Some
systems display an output of the pose of the robot. A high level of skill is needed in
order to efficiently control these types of robots, as the operator has to “learn” the
robot’s inverse kinematics and Jacobian matrices. This heavy mental workload is one
of several reasons that EOD robots tend to have relatively few degrees of freedom
(DOF).
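The burden of commanding in joint space can be illustrated with a planar two-link arm (a generic textbook model with hypothetical link lengths, not any fielded EOD manipulator): the same fixed joint-velocity command, as from a toggle switch, produces very different end-effector motions depending on the arm's pose. This pose-dependent mapping, the Jacobian, is what the operator must internalize:

```python
import numpy as np

def jacobian_2link(theta1, theta2, l1=0.5, l2=0.4):
    """Jacobian of a planar 2-link arm: maps joint rates (rad/s) to
    end-effector velocity (m/s). Link lengths l1, l2 are illustrative."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# The same toggle-switch command (0.1 rad/s on joint 1 only) yields
# different end-effector velocities at different poses:
qdot = np.array([0.1, 0.0])
v_a = jacobian_2link(0.0, 0.0) @ qdot        # arm outstretched
v_b = jacobian_2link(0.0, np.pi / 2) @ qdot  # elbow bent 90 degrees
```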
Additional constraints exist, including the need to be compact in size in order
to maximize access to confined areas. Varying conditions and hazardous work also
put a premium on the need for low-cost maintenance, which also tends to encourage
the fielding of low-DOF systems. These systems are also significantly limited in the
feedback given to the user. Current systems lack any type of kinesthetic or tactile
feedback. At best, information on applied forces must be inferred from auditory
information from the motors and internal models of the effects of system inputs.
This final limitation is a significant one, given that a great deal of the work being
done is delicate in nature. The actions of accessing an explosive device, rendering it
safe, and gathering evidence afterwards could be greatly influenced by the addition
of haptic feedback to the operator.
While many of these limitations could be overcome with a substantial increase in
spending, there is a major incentive to keep the cost of these systems low. While
industrial robots frequently attain a mean time between failure (MTBF) of 50,000
hours or more [7] [8], the average EOD robot has an MTBF of only 6 to 20 hours [9].
While such an MTBF would be unacceptably low in any field, in Explosive Ordnance
Disposal failures tend to be catastrophic.
In addition to issues related to reliability and the hazards of the environment, there
is a significant disparity in the level of technology used to create explosive threats
and that which is used to dispose of them. As an example, a typical land mine costs
between $3 and $30 [10]. Costs to remove land mines average around $800 per land
mine, in addition to the potential cost of human life for those who remove them [4].
Likewise, many IEDs can be constructed with exceptionally inexpensive materials, as
unexploded ordnance tends to be readily available. Robotic systems, while varying
significantly in price, are invariably several orders of magnitude more expensive than
the threats they seek to neutralize [11].
Results from civilian police departments [1], seen in Figure 1.2, indicate that the
ideal cost for a robotic system should be under $40,000. This is likely a function
both of the robot's tendency toward catastrophic failure and the relatively limited
funding available to bomb disposal units. While this cost may be unattainable given
the necessary capabilities of an effective EOD robotic system, it is a testament to the
importance of cost minimization.
Efforts to overcome current robotic limitations must be constantly cognizant of
Figure 1.2: Results from police survey regarding the ideal cost of robotic systems for
bomb disposal, reproduced from [1]
the cost involved in doing so. While performance and reliability should always be
maximized, the system should ultimately be expendable.
With these considerations in mind, there is a significant need to develop cost-effective methods to display haptic information to the user.
1.2
Prior Work
This research builds on previous work from two very different areas: haptic technologies for telemanipulation, and robotic systems for Explosive Ordnance Disposal.
1.2.1
Prior Work in Haptic Feedback
Haptics refers to the sense of touch, and haptic technology encompasses devices and
software that display haptic information to users in virtual and teleoperated environments. Haptic feedback is often described as cutaneous (tactile feedback, related
to the skin) or kinesthetic (force feedback, related to the muscles and joints). The development and efficacy of haptic feedback for teleoperation in various applications is
relevant to the research described in this essay.
Some of the earliest haptic feedback systems were designed for teleoperation in
hazardous environments, particularly manipulation of radioactive materials and later
for space robots and surgery [12]. Originally, haptic feedback to the user was produced
due to a direct mechanical connection between the “master” device and the remote
“slave” robot. Then, as master and slave devices were physically disconnected and
controlled “by wire”, numerous control schemes invoking sensors on the slave and
actuators on the master were developed to enable haptic feedback.
Much of the research in haptic feedback for teleoperation has focused on high-performance, low-impedance devices operating in a bilateral mode. That is, force
and motion information are exchanged between the master and the slave. Challenges
in bilateral teleoperation include maintaining stability and transparency in light of
uncertainty in the dynamic models of the human operator, and time delays. Stability
for teleoperators can be defined as bounded system inputs resulting in bounded system outputs. Transparency is the ability of a teleoperator to make the user feel as if
he is directly manipulating a remote environment, rather than through a teleoperator.
Supervisory and shared control are methods of overcoming delays and increasing performance without requiring the human to be constantly in the loop, but they lack transparency.
In addition, wave variables have been used in bilateral teleoperators to eliminate the
destabilizing effects of lag.
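These two properties are commonly formalized as follows (a standard statement from the teleoperation literature, added here for clarity rather than taken from this essay): ideal transparency requires that the slave track the master's motion while the master reflects the environment force,

```latex
x_s(t) = x_m(t), \qquad f_m(t) = f_e(t),
```

or, equivalently, that the impedance transmitted to the operator's hand match the environment impedance, Z_t = Z_e.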
While direct haptic feedback based on bilateral teleoperation will likely be useful
in EOD systems in the years to come, methods such as sensory substitution are
much more applicable to situations requiring robustness in challenging operational
environments. The specifics of the EOD environment require lower-cost, more robust
solutions for haptic display than the high-fidelity bilateral systems being developed
for other applications. Many of the benefits of sensory substitution methods for force
feedback were shown by M. Massimino in [13]. These include the ability to display
to the user small changes in forces, and the lack of issues with instability. For tasks
involving detecting contact, sensory substitution outperformed kinesthetic feedback
as it allowed the users to sense smaller forces. It was also found that tactile displays
were effective because they did not overload the subjects’ visual system, nor did they
induce operator movement or instability.
Sensory substitution methods have seen significant interest recently, due to their
potential application in robot-assisted surgical systems. Gwilliam et al. [14] used the
da Vinci Surgical System to detect calcified arteries by means of palpation. Results
showed that graphical feedback of force increased the performance of both experienced
and novice users over no haptic feedback, while direct force feedback (to the user’s
hands, via the master manipulator) increased user performance only among experienced users. Likewise, Kitagawa, et al. [15], [16] used the da Vinci to perform suturing
tasks and used visual and auditory sensory substitution to display forces to the user.
Reiley et al. [17] further demonstrated the effectiveness of visual feedback of force
information in improving suture tying with a surgical robot.
While most research using surgical systems has focused on visual sensory substitution of force information, there has also been some work developing and evaluating
vibrotactile feedback. In [18], the authors developed a vibrotactile feedback system in
which vibrations were applied to a subject's foot. They showed that a linear increase
in vibration intensity is perceived as a linear increase in force and that the system
improved a user’s ability to differentiate tissue softness.
Relevant work has also been done in using vibrations for event detection, an
important part of telemanipulation using direct, shared or supervisory control. In [19],
accelerations were measured on the slave robot and fed back to the user via a vibrating
device. Using both context and sensor-based data, event detection can be done with
a very high degree of certainty, given an array of sensors to measure the full state
of the robotic system [20]. In [21], the stability and robustness of this technique are
increased with the addition of smooth phase transitions between events.
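A rough sketch of acceleration-based contact detection in this spirit (the filter structure, cutoff frequency, and threshold below are illustrative choices, not those used in [19]-[21]):

```python
import numpy as np

def detect_contact(accel, fs=1000.0, cutoff=50.0, threshold=0.5):
    """Flag samples where high-frequency acceleration content exceeds
    a threshold, indicating a contact transient.

    fs is the sample rate (Hz); cutoff (Hz) and threshold are
    illustrative values that a real system would tune empirically.
    """
    # First-order high-pass filter isolates the sharp transients that
    # contact produces, while rejecting slow rigid-body motion.
    rc = 1.0 / (2.0 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / fs)
    hp = np.zeros_like(accel)
    for i in range(1, len(accel)):
        hp[i] = alpha * (hp[i - 1] + accel[i] - accel[i - 1])
    return np.abs(hp) > threshold

# A slow ramp (free motion) raises no events; a step (contact) does.
accel = np.concatenate([np.linspace(0.0, 0.2, 500), np.full(500, 5.0)])
events = detect_contact(accel)
```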
In this research, we estimate force applied by the slave robot (the gripper of an
EOD robot) on the environment, and display the sensed information via sensory
substitution. The sensory substitution methods we consider are a visual bar graph,
similar to that of Kitagawa et al. [15], [16], and vibration feedback via pager motors attached
to the master device (a game controller).
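This pipeline can be sketched in a few lines (a minimal illustration in Python; the calibration gain, current deadband, and force range are hypothetical placeholders, not the values identified later in this essay):

```python
def estimate_grip_force(current_amps, k_force=2.5, deadband=0.3):
    """Estimate grasp force (N) from gripper motor current (A).

    k_force (N/A) and deadband (A, roughly the current drawn during
    free motion) are hypothetical; a real system would calibrate them
    against a force/torque sensor.
    """
    return max(0.0, (current_amps - deadband) * k_force)

def substitute(force_n, f_max=10.0):
    """Map estimated force to a normalized display level in [0, 1],
    usable both as the fill fraction of a visual bar graph and as the
    pager-motor drive level (e.g., a PWM duty cycle)."""
    return min(max(force_n / f_max, 0.0), 1.0)

# Below the deadband (free motion) nothing is displayed; above it the
# display level grows linearly with estimated force until saturation.
level_free = substitute(estimate_grip_force(0.2))
level_grasp = substitute(estimate_grip_force(2.3))
```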
1.2.2
Prior Work in Explosive Ordnance Disposal Robotics
Technological innovation has long played an important role in Explosive Ordnance
Disposal. During World War II, the Research Department of the US Navy Bomb
Disposal School [22] and their counterparts in the United Kingdom, the Unexploded
Bomb Committee [23], made remarkable improvements to the technologies available to
EOD technicians and Ammunition Technical Officers (ATOs). Many of the solutions
that they came up with could not be tested in laboratory conditions, so these groups
spent significant amounts of time in the field working on live ordnance [22] [23].
A few of the many innovations that these two groups devised during World War
II are listed below:
• Acid Trepanning - A nitric acid solution applied to the steel bomb case in a fine
spray to cut a hole in a piece of ordnance. No undesirable effect upon hitting
the main charge.
• Freezing Technique - Lowering the temperature of the fuze until the dry cell of
the battery no longer produces a current. Freezes the mercury globule in the
mercury tilt switch. Frozen using a dry ice/alcohol slush.
• Plaskon Resin Injection - Attack on mechanical fuzes by inserting a quick hardening resin [22].
• Magnetic Clock Stopper - A large electromagnet fixed to the side of the bomb
through which high current was passed. The resulting magnetic field stopped
the ticking of mechanical clocks while it was in place.
• Mine Locater - Early metal detector.
• Fuze Extractor No. 1 (Freddy) - Frame, pneumatic jack, an extractor rod, and a
discharger. Used a CO2 cartridge which raised the extractor rod when pierced.
Because there were several inches of play before the fuze was extracted, the
ATO had several minutes to distance him/herself.
• Radiography - Early X-Ray technology with an adjustable frame which could
be fitted to bombs of varying circumferences [23].
While many of these advancements certainly saved lives, distance is the only
factor that can truly keep an EOD technician safe. Because of this fact, one of the
most basic tools that the EOD technician uses is the hook and line, which is an
extremely low-tech means to manipulate an object from a distance. In many ways,
teleoperated robots have been developed as an extension of this simple solution. Since
their inception, robotic systems have been used extensively as a way to render safe
explosive threats while maintaining the safe distance of the technician.
Figure 1.3: EOD Teleoperator System
In the United States, the idea of using robotic systems for EOD was first explored
in the 1960s [24]. The EOD Teleoperator System (Figure 1.3, reproduced from [24])
was developed by the EOD Robotics Program and consisted of a master-slave manipulator mounted on a six-wheeled vehicle. However, this system was found to be
infeasible for EOD use due to its complexity [24].
As a result, the primary development of early fielded EOD robotic systems took
place in the United Kingdom. Because of conflicts in Northern Ireland, there was an
immediate need to “attach a hook to a car bomb to allow the vehicle to be towed away
to a site where it could be safely destroyed. All too often the process of attaching the
towing hook triggered the explosion – killing the ATO” [25].
Because of this, Lt. Col. Peter Miller of the Royal Army Ordnance Corps was
asked to devise a solution. Miller retrofitted a battery-operated three-wheeled wheelbarrow chassis with a spring-loaded hook on a boom to latch underneath a suspect
Figure 1.4: Mark I Wheelbarrow (1972)
car [25]. The controls of this device consisted of four nylon lines. Two steered the
front wheel of the device, another reversed the direction of the motor, and the last engaged the spring-loaded hook. Both the controls of the robot and its intended effects
were modeled after line and hook methods used by EOD technicians for decades [26].
This design was simple; it was invented, designed, and put into production in
22 days. Named the Wheelbarrow (Figure 1.4), after the platform on which it was
created, it was immediately fielded on the front lines in Northern Ireland [26]. Figure
1.4 and all other Wheelbarrow figures are reproduced from [26], unless otherwise
noted.
Each failure of a Wheelbarrow was referred to Lt. Col. Miller to solve. As such,
several significant improvements were made to the system over a relatively short
period of time. The first improvements made to the Mark I, shown in Figure 1.5, were
the addition of a second motor to control the steering of the vehicle and a boom that
allowed it to drop explosive charges into suspect cars.
Figure 1.5: Mark II Wheelbarrow (1972)
Figure 1.6: Mark III Wheelbarrow (1972)
The Mark III (Figure 1.6) added additional linear actuators which turned the
static boom into a robotic manipulator, albeit a simple one. Additionally, an improved
chassis was used with a fourth wheel to provide greater stability to the system. Closed
circuit cameras were added to a later iteration of the Mark III, as were clamps to
hold explosive disrupters [26].
The Mark IV and V (Figure 1.7) saw significant improvements to the kinematic
design of the Wheelbarrow, in addition to an improved electronics system. Over
the course of two years, Miller and his team produced 22 Mark V’s in addition to
a handful of each of the earlier iterations of the system. By November of 1973, the
Wheelbarrow had been used operationally more than 100 times [26].
Figure 1.7: Mark V Wheelbarrow (1973)
Figure 1.8: Mark VI Wheelbarrow (1975)
Research and development of the Wheelbarrow was taken over by Remotec, Inc.
in 1976 and they started to market the Wheelbarrow worldwide. They produced the
Mark VII (Figure 1.11) later that year. The purpose of the Wheelbarrow has typically
been reconnaissance and disruption, much like other early EOD robotic systems such
as the UK Ministry of Defense Buckeye, shown in Figure 1.9, reproduced from [26].
Manipulation did not become a major goal for the platform until much later systems
such as the Mark IX (Figure 1.13, reproduced from [27]).
The Wheelbarrow is operated by an Operator Control Unit (OCU), shown in
Figure 1.10, with toggle switches which control the direction of each joint individually.
A separate gain knob controls the speed at which each joint moves when commanded.
Parallel to these developments, the United States continued to develop robotic
systems for Explosive Ordnance Disposal. Following the EOD Teleoperator System,
Figure 1.9: UK MoD Buckeye
Figure 1.10: Wheelbarrow OCU
Figure 1.11: Mark VII Wheelbarrow
Figure 1.12: Remotec Mark VIII Wheelbarrow (1997)
efforts were made to develop smaller, low-cost robotic technologies. The first of
these developments was the Remotely Operated Vehicle for Emplacement and Reconnaissance (ROVER) [24]. At $10,000, the ROVER (Figure 1.14) was a low-cost
cable-controlled robotic system. All remaining figures in this chapter are reproduced
from [24] unless otherwise noted.
On board, the ROVER had a video camera, simple manipulator, and an interface
to fire EOD disrupter tools. Despite its communications and power tether, its portable
Figure 1.13: Mark IX Wheelbarrow
battery pack limited it to an operational endurance of two to four hours. Serious
additional limitations were found in the ROVER system and it was discontinued
in the mid-1980s. Although it was never operationally fielded, it was a significant
learning experience for the EOD community as it demonstrated the efficacy of low-cost, low-DOF robotic systems. Subsequent robotic systems tended to be more akin
to the ROVER than the EOD Teleoperator System.
As a follow-on to the ROVER, the Remotely Actuated Mobile Platform for Render
Safe and Disposal (RAMROD) was developed [24]. The RAMROD, shown in Figure
1.15, was similar in form and cost, but was designed to be weather resistant, field
serviceable, and to be able to climb stairs. Similar to the ROVER, shortcomings in
the system, as well as the existence of potentially more capable commercially available
Figure 1.14: Remotely Operated Vehicle for Emplacement and Reconnaissance
Figure 1.15: Remotely Actuated Mobile Platform for Render Safe and Disposal
systems, prevented the RAMROD from ever being fielded operationally.
The RAMROD program transitioned into a new effort which resulted in the Remote Control EOD Tool and Equipment Transporter (RCT) [24], shown in Figure
1.16. After many years of development, this was the first robotic system to actually
be used by troops. While its use overseas was limited to the Gulf War, it was found to
be an effective means of dealing with IED threats. However, its effectiveness against
Figure 1.16: Remote Control EOD Tool and Equipment Transporter
conventional ordnance was minimal and its overall unit cost was over $600,000. It
was used by all of the services until it was replaced by the Remote Ordnance Neutralization System (RONS).
The morphology of the RONS (Figure 1.18) is very similar to that of its predecessor, although its feasibility was first proven by the Semi-Autonomous Mobile
System for Ordnance Neutralization (SAMSON) [24]. The SAMSON (Figure 1.17)
featured the first 6-DOF manipulator arm to be used on an EOD robot. Additionally, it demonstrated the capability of end effector tool exchange, and more advanced
manipulation. The RONS, fielded in 1999, built on lessons from the SAMSON and
proved capable of assisting EOD technicians in more aspects of the mission than any
previous system. The RONS remains in use by all services, with over 320 robots having been produced. It is used most frequently by Air Force EOD technicians because
of their specific mission set [28].
Figure 1.17: Semi-Autonomous Mobile System for Ordnance Neutralization
Much as The Troubles in Northern Ireland provided the imperative to make robotic
systems an essential part of the UK EOD tool kit, so did the Iraq War have a significant impact on the role of robotic systems in EOD in America. In both of these
conflicts, EOD was at the front lines, and IEDs and car bombs were the weapon of
choice. In the UK, this environment led to the creation of the Wheelbarrow. In the
US, ongoing efforts to develop a Man Transportable Robot System (MTRS) resulted
in the fielding of a combined 3,000 QinetiQ Talon (Figure 1.1) and iRobot PackBot
(Figure 1.19, reproduced from [29]) robots from 2005 to present [28]. Each MTRS
costs roughly $140,000, has relatively few degrees of freedom and almost no autonomy.
However, both systems perform very well in extreme environments and are optimized
for the rigors of field work.
While the MTRS has improved considerably in terms of reliability, survivability,
and capabilities from their predecessors, the controls and user interface for these
Figure 1.18: Remote Ordnance Neutralization System
systems look remarkably similar to those of the earliest EOD robots. The output
of a closed-circuit television camera, easily identifiable on the RAMROD, SAMSON,
PackBot, and Talon, is displayed on a small screen of an operator control unit. While
newer systems have multiple cameras and some advanced optics technology, the visual
display is the only feedback given to the operator.
The size of the OCU increased with later systems, as can be seen with the RONS
OCU. This trend was reversed with the MTRS, both of whose controllers are similar
in size to a large briefcase. The user inputs on these OCUs are almost universally
velocity control toggle switches, with a gain dial to adjust the speed of the joint being
moved. The Talon and PackBot departed from this slightly by using continuous
input joysticks and “intuitive” hockey puck-sized paddles, respectively. Both models
can now be controlled with a standard size video game controller which maps each
joystick axis to a joint on the robot.
Figure 1.19: iRobot PackBot
A significant portion of the US EOD mission is conducted underwater, combating
both naval mines and sunken ordnance. In order to assist in this mission, unmanned
underwater vehicles (UUVs) such as the Hydroid REMUS (Figure 1.20, reproduced
from [30]) are employed. The REMUS is a 5 ft long, 80 lb submersible that can
operate at depths up to 100 ft and is equipped with a large array of sensors for
navigating in the water column and locating ordnance.
Due to the constraints of the underwater environment, unmanned underwater systems are employed in manpower-intensive operations such as broad area surveillance.
Allowing UUVs to take over this slow, intensive work reduces risk to EOD technicians and allows them to focus on intelligence gathering and render safe procedures
on ordnance [31].
Current systems fielded by the US Military for underwater EOD operations lack
Figure 1.20: Hydroid REMUS
any manipulator and instead focus on intelligence, surveillance, and reconnaissance.
While manipulation will likely be a goal in future systems, the largest focus for
improvement on these systems is in more capable sensors and increased autonomy
and power [31].
There has been some work in academia to develop robotic systems for EOD. Due
to the small number of EOD technicians, and hence EOD robots, the results from
the majority of studies developing EOD robotic systems have not been implemented,
expanded upon, or seen significant citation.
In [32], a system is devised where a large number of low-cost robots execute a
Pick Up and Carry Away (PUCA) mission to combat cluster ordnance. The relative
benefits of exhaustive and random searches are examined as well as the importance
of multiple drop off points. Further development of this system in [33] emphasizes
the importance of low cost, performance, and simplicity.
Several efforts have been made to create robots for demining. In [34], a low-cost, lightweight system for demining is developed. The study lacked significant evaluation
of the robotic system and noted that the cost of the robot, at around $6000, was still
an order of magnitude greater than hoped. In [35], sensors are determined to be the
greatest limiting factor in creating effective robotic solutions to demining.
iRobot developed a system for kinesthetic gripper force feedback on the PackBot robot in [36]. Forces were displayed to the user with a modified Novint Falcon
interface. Results from this study indicated increased performance of delicate manipulation tasks with haptic feedback, but task times tended to increase as well.
Additionally, the study noted that user performance decreased significantly when
using the Falcon without force feedback.
In [37], an impedance-controlled bimanual system for EOD with virtual fixtures to
prevent self-collision was proposed. This system was used with satisfactory results, but
with significantly increased task completion time over the manual case. Additional
work to make this robotic system robust and mobile did not occur.
While other work has taken place to develop robotic systems for EOD, they have
primarily been demonstrative and have not significantly influenced fielded systems.
1.3 Thesis Contributions
This thesis describes the following contributions:
• To the best of the author’s knowledge, the first systematic development and
assessment of a sensory substitution haptic feedback system for a teleoperated
Explosive Ordnance Disposal robot
• A detailed examination of the relative benefits gained from low-cost feedback
solutions when applied to grasping tasks
• Experimental evidence demonstrating improved event detection with haptic
feedback
1.4 Organization
This thesis is organized into several chapters following this introduction. First,
Chapter 2 describes the various pieces of the physical, electromechanical, and software
systems that were used for the experiments, with particular focus on the integration of
these components. This chapter describes the input devices, manipulators, feedback
devices, and system integration tools used.
Next, Chapter 3 gives a detailed explanation of the experiment that was conducted, including a defined protocol. Then the data from the experiment is presented,
annotated, and followed by statistical analysis.
The thesis concludes in Chapter 4 with a discussion of the contributions of the
research and the areas of future work. Following this conclusion, documentation and
code are attached as appendices.
Chapter 2
Experimental System

2.1 Overview
Teleoperated robotic systems put the human operator into the control loop of the
robot. In all currently fielded Explosive Ordnance Disposal systems, the operator
gives velocity commands to the robot in joint space using toggle switches or joysticks.
The operator is provided with a live camera feed through the Operator Control Unit
(OCU).
In order to improve the usefulness of the telepresence, information about applied forces can also be displayed, better equipping the user to make decisions about subsequent commands to give the robot.
For this control loop (Figure 2.1) to be realized, several interworking pieces must
be implemented. First, a robotic system must be selected to which the operator can
give commands. For EOD robots this must include a mobile platform, a manipulator arm, and an end effector tool or gripper for interacting with the environment. Additionally, a method must exist for the operator to give commands to the robot. Finally, both visual feedback systems and haptic feedback systems must be designed in order to close the loop.

Figure 2.1: Basic teleoperation control loop
2.2 Input Device
After examining input devices that are currently used in EOD robots, a video
game controller was selected as the single input device used to give commands to the
robot. This input device is currently used on MTRS systems as an improvement on
its standard interface. Initial plans for this research hoped to generalize these findings by examining several different input devices, but ultimately, time and resources
prohibited this.
While using a single input device does not invalidate the findings of this research,
examining multiple input devices is particularly important for haptic feedback as some
haptic feedback modalities act on the user through the input device. Additionally, the
effect of any particular feedback modality is likely also a function of the compatibility
of the input device to that feedback modality. In order to make our experimental
platform as effective as possible, an input device was chosen that is very similar to what is
currently being used in the field and will likely remain a standard feature of near-term
EOD robotic systems.
Several additional input devices were examined, including the Cyberglove and
Cybergrasp, the Novint Falcon, and a master/slave controller. While the Cyberglove
and Cybergrasp may have allowed for detailed force feedback of grasp forces, 21 of
its 22 sensors would have gone unused, as the gripper that was selected was underactuated and driven by a single motor. Additionally, an effective means was not found to control the
manipulator in addition to the end effector without use of the Cyberforce system or
an optical tracking system, both of which are unlikely to be fielded operationally in
the near term.
The Novint Falcon, while possibly effective in controlling the manipulator, was
not assessed to have a particularly good mapping to the workspace of the full manipulator arm. While several possibilities existed for overcoming this, time was the
primary factor ruling out this input device. Finally, a passive mini-master manipulator could have been built in order to send joint commands to the manipulator,
however, both time and funding prevented this from becoming immediately feasible,
although this type of control has a reasonable chance of being fielded on future EOD
robotic systems.
2.2.1 Logitech Dual Action Gamepad
Because benefits can be gained from using systems that operators are already
familiar with [38], several current robotic platforms are controlled with video game
controllers, rather than bulky operator control units. Therefore, the Logitech Dual
Action Gamepad (Figure 2.2) was used in our setup in order to provide the operator
with a control input with which he likely already had extensive experience.
Each axis on the gamepad controller was linked to a separate joint on the robot. When possible, the mapping between the robot joints and controller axes was constructed in a logical way based on how the movements would affect the manipulator
frame of reference. For example, left and right motions of the left joystick were
mapped to counter clockwise and clockwise rotations of the torso joint, respectively.
Each axis operated in velocity control mode using a scaled input from the analog
joysticks (the motivation for this choice is given in Section 3.1.3).
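The scaled velocity mapping described above can be sketched as follows. This is an illustrative Python fragment (the actual system was implemented in MATLAB); the function name, default gain, and deadband threshold are assumptions rather than details from the thesis:

```python
def axis_to_joint_velocity(axis_value, max_joint_vel, gain=1.0, deadband=0.05):
    """Map a joystick axis reading in [-1, 1] to a joint velocity command.

    A small deadband suppresses stick drift near center; the gain plays the
    role of the speed dial found on traditional OCUs. Both default values
    are illustrative placeholders."""
    if abs(axis_value) < deadband:
        return 0.0
    return gain * axis_value * max_joint_vel
```

In use, each of the four stick axes would be passed through such a mapping on every control cycle to produce the joint-space velocity command for its assigned robot joint.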
The gamepad was connected to the computer using a USB port and was read
using a serial protocol. By utilizing the JavaJoystick.m MATLAB object from the
Revolutionizing Prosthetics library [39], the gamepad was initialized and controlled.
Its twin joysticks were read using encapsulated functions, and the X and Y axes for
each joystick yielded a continuous output of -1 to 1. Button values were placed into an
2.3 Manipulator
The manipulator used in this research consisted of a prototype Three-Jaw gripper
and 4-DOF robotic arm.
2.3.1 Three-Jaw Gripper
While most EOD robotic systems utilize a two-jaw gripper or parallel gripper,
there is an effort to transition towards robotic systems that are more anthropomorphic
[24]. The majority of tools and interfaces are built with the human hand in mind,
so it is a logical choice to use grippers that are similar in form and function. While
this may eventually lead to robotic grippers with DOF on the order of the human
hand, it is more likely that the transition will first occur by introducing grippers that
are conformal in nature and possess coupled kinematics similar to the human finger
which still take advantage of anthropomorphic morphology, but lack the complexity
of higher-DOF grippers.
The Three-Jaw Gripper (Figure 2.3, reproduced from [40]) built by Contineo
Robotics is inspired by the human hand but is designed to be much simpler. It
contains 9 DOF, but is actuated with a single motor. The excess DOF are underactuated. This design feature allows each finger to naturally conform around a grasping
surface as each link in the kinematic chain makes contact with an object. This design also turns the “palm” of the gripper into a natural grasping surface, increasing
the stability of a given grasp through further kinematic coupling. There is natural compliance built into each finger joint so that the stalling of a single finger will not immediately stall the remaining fingers.

Figure 2.3: Contineo Robotics Three-Jaw Gripper
The gripper used for these experiments is an early prototype of a family of conformal grippers which are currently in the final stages of development and scheduled
to be released within the year.
The motor is built with a current sensor, tachometer, and encoder. The motor
itself consists of a brushless motor driving a frictional planetary gear with a cycloidal
drive output. The output is then sent through a compound spur gear train which
drives the fingers on the gripper. The final drive ratio is approximately 1000:1.
In an attempt to develop technology that uses as little additional hardware as
possible, we used the current sensor in order to determine the torque being output
by the gripper. In order to better understand the necessary torques required for the
gripper to achieve a particular state, the system parameters were identified.
While a mapping of motor torques to accelerations can be achieved by analytically
describing the dynamics of the system, the significant nonlinearity, gearing, backlash,
and compliance would greatly reduce the accuracy of such a technique. As such, empirical methods were pursued in order to discover the parameters of the system. The
equation governing the relationship between current and output torque was assumed
to be of the following form:
I = φ(θ, θ̈) + β(θ, θ̇) + τ_applied    (2.1)
Where I is the current driving the motor, θ is the absolute position of the motor, φ
represents the torque needed to accelerate the motor, β represents the torque needed
to close the gripper at a constant velocity, and τ is the current being supplied to apply
torque on an object. Both φ and β were assumed, and experimentally confirmed, to
be dependent on θ as well as θ̈ and θ̇ respectively.
An experiment was performed where the gripper was opened and closed numerous
times at a variety of speeds. Each open and close command took place over a range
of 400 counts of the encoder on the motor shaft, with 0 being completely open and
400 being completely closed (Figure 2.4). The variable θ was assigned to represent
the position of the gripper in encoder counts, although strictly speaking it did not
represent either the “angle” of the gripper or the motor shaft. This assumption can
be made without loss of accuracy as the mapping of motor position to gripper position
is an arbitrary one.

Figure 2.4: Mapping of motor encoder counts to gripper position [25, 75, 125, 175, 225, 275, 325, 375]
The function β was assumed to depend on θ and θ̇. Because velocity terms can
easily be found without acceleration, but not vice-versa in the discrete case, β was
isolated by opening and closing the gripper at different speeds and then removing inapplicable data points. Any data with acceleration was removed, thereby eliminating φ. Additionally, the gripper was not supplying any torque to an object. As such, the
function β was isolated.
I = β(θ, θ̇)    (2.2)
Each of the remaining terms was sampled relatively easily, but the function was further simplified from a multi-input/single-output system to a single-input/single-output system by assuming that the function was constant with respect to θ over a
relatively small range of θ. This reduced the complexity of the function to the point
where a least squares solution could map inputs (θ̇) to outputs (I). An nth order
polynomial was constructed to model the relationship between β and θ̇.
β_θ(θ̇) = Σ_{i=0}^{n} p_i θ̇^i    (2.3)
As previously stated, this polynomial was assumed to be constant over a relatively
small range of θ. As such, the data was separated into different batches around each
θ range (Figure 3.3). It was experimentally found that eight separate batches of
θ, consisting of 50 counts each, led to functions which resembled the functions from
bordering batches of data, but did not necessarily resemble the functions derived from
data two batches away.
Polynomials were then constructed (Table 2.4) that mapped θ̇ to I in a least squares sense for a given θ range. A 6th-order polynomial was found to minimize interpolation error unless the number of data points was sufficiently small, in which case a 3rd-order polynomial was used in order to prevent overfitting the data.
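The batch-and-fit procedure can be sketched in pure Python as follows (the original analysis used MATLAB's least-squares tools; the function names, fallback degree, and minimum-point threshold here are assumptions):

```python
from collections import defaultdict

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.

    Returns coefficients p with p[i] multiplying x**i, matching the
    polynomial form used for the beta(theta, theta_dot) model."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution on the upper-triangular system.
    p = [0.0] * n
    for i in reversed(range(n)):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, n))) / A[i][i]
    return p

def fit_batches(samples, batch_size=50, degree=6, fallback_degree=3, min_points=20):
    """Bin (theta, theta_dot, current) samples by theta and fit each bin,
    dropping to a lower-order polynomial when a bin has few points."""
    batches = defaultdict(list)
    for theta, theta_dot, current in samples:
        batches[int(theta // batch_size)].append((theta_dot, current))
    fits = {}
    for key, pts in batches.items():
        xs, ys = zip(*pts)
        d = degree if len(pts) >= min_points else fallback_degree
        fits[key] = fit_poly(list(xs), list(ys), min(d, len(pts) - 1))
    return fits
```

Each entry of the returned dictionary corresponds to one 50-count θ batch and holds the coefficients of that batch's current-versus-velocity polynomial.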
Because of the inability of polynomial curve fitting to extrapolate to data outside
of the region for which it was created, a horizontal asymptote (Figure 2.5) was created starting at the final data point and continuing on to higher values of θ̇. While this assumption of a horizontal asymptote is not perfect, it is a significant improvement over using the polynomial values to predict extrapolated data, and significantly increased the robustness of the system.

Figure 2.5: Graph displaying the prevention of extrapolation error by adding a horizontal asymptote
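One minimal way to realize such an asymptote is to clamp the model input at the last fitted data point, as in this illustrative Python sketch (the thesis implementation was in MATLAB):

```python
def make_clamped_model(coeffs, theta_dot_max):
    """Wrap a fitted polynomial (coeffs[i] * x**i) so that inputs beyond the
    last fitted data point return the value at that point, i.e. the model
    continues as a horizontal asymptote instead of extrapolating."""
    def model(theta_dot):
        x = min(theta_dot, theta_dot_max)  # clamp to the fitted range
        return sum(c * x ** i for i, c in enumerate(coeffs))
    return model
```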
After the required amount of current to drive the gripper with constant velocity
was modeled, a set of data was taken using various values for the acceleration of the
gripper. Again, the data was sorted into batches based on θ. The measured velocity
for each data point was used in order to subtract off the current being used to drive
the gripper at that velocity. According to the model, the remaining torque should
        0-50         50-100        100-150       150-200
p0      6.413E-05    -3.369E-10    -2.6441E-10   -2.161E-10
p1      -0.0170      1.698E-07     1.40E-07      1.207E-07
p2      1.589        -3.286E-05    -2.8685E-05   -2.602E-05
p3      7.585        0.00306       0.00284       0.00270
p4                   -0.144        -0.141        -0.139
p5                   3.726         3.760         3.807
p6                   0.556         0.530         0.558

Table 2.1: Three-Jaw Gripper torque/velocity polynomial values for θ ∈ {0-200}
have been due to the acceleration of the gripper as there was no torque applied. The
data was fitted to a kth-order polynomial relating current to θ̈:
φ_θ(θ̈) = Σ_{i=0}^{k} p_i θ̈^i    (2.4)
It was found that the data did not imply a simple nonlinear function between θ̈
and current, but rather that the effects of static friction were significantly more of a
determining factor than inertial effects when the gripper was already in motion. As
such, the model was revised to be of the following form:
I = β(θ, θ̇) + τ_applied + ψ    (2.5)
where ψ is a function modeling the effects of static friction. With this corrected
model, the method for determining the function β did not change as ψ only had nonzero values where acceleration was present.
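Given per-batch β models, the current attributable to applied torque can be estimated as a residual during (near-)constant-velocity motion, where ψ ≈ 0. The following Python sketch illustrates the idea; the names and the batch lookup scheme are assumptions:

```python
def applied_current(i_measured, theta, theta_dot, beta_models, batch_size=50):
    """Estimate the residual current due to grasp torque: I - beta(theta, theta_dot).

    beta_models maps a theta batch index to a callable beta(theta_dot); the
    result is only meaningful at (near-)constant velocity, where the static
    friction term psi is assumed to vanish."""
    beta = beta_models[int(theta // batch_size)]
    return i_measured - beta(theta_dot)
```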
        200-250       250-300       300-350      350-400
p0      -3.1353E-10   -2.1861E-10   -2.691E-10   1.515E-05
p1      1.648E-07     1.194E-07     1.370E-07    -0.00675
p2      -3.340E-05    -2.506E-05    -2.681E-05   1.0638
p3      0.00325       0.00252       0.00253      14.600
p4      -0.155        -0.1276       -0.121
p5      3.916         3.520         3.334
p6      0.556         0.503         0.471

Table 2.2: Three-Jaw Gripper torque/velocity polynomial values for θ ∈ {200-400}
2.3.2 Robotic Arm
Several manipulators were examined for use, including the WAM arm from Barrett Technology, the TALON manipulator, the PackBot manipulator, and the HD-2
manipulator from Northrop Grumman. While the WAM arm provided the most capabilities and even allowed for upper arm kinesthetic feedback, it bears the least
resemblance to currently fielded EOD systems and an examination of the effects of
greater DOF and dexterity on EOD teleoperation performance likely could be its own
study.
While both the TALON and the PackBot are heavily used systems, both would
have been somewhat difficult to come by and offered fewer options for reading data
from the robotic system into a laptop for processing. The HD-2 arm (Figure 2.6)
on the other hand, while not currently commercially available, was acquired on loan
from Contineo Robotics and was readily controllable using a MATLAB GUI. This GUI
was able to be integrated with a GUI made for feedback purposes in order to simplify
the setup.
Table 2.3: Three-Jaw Gripper - torque/velocity identification raw data

Table 2.4: Three-Jaw Gripper - torque/velocity identification filtered data with polynomial fit curves
The HD-2 Manipulator is a 4 DOF (typically 5, but the Contineo Gripper Prototype lacked wrist roll) manipulator which measures 52 inches when fully extended.
It has a lift capability of 125 lb close to the body and 40 lb at full extension.
Due to limitations described in Chapter 3, the arm was not used for experimentation. It was, however, an important part of the system setup as it gave insight into
the difficulties of controlling an EOD manipulator and gripper in joint space with an
input device with fewer DOF than the robot.
2.4 Sensors
Several sensors were considered in order to acquire haptic information. Initially,
polyvinylidene fluoride (PVDF) pressure sensors, strain gages, accelerometers, and various
other sensors were examined in order to measure applied pressure and the state of
the robot. However, due to time and equipment limitations, it was decided to use the
current information from the motor and accelerometer data, thereby measuring both
applied forces and vibrational effects.
2.4.1 Accelerometer
In order to sense high-frequency vibration of the gripper, the Kistler Piezotron 3
DOF Accelerometer was used. In sensing early contact, vibrational effects from the
discontinuity of contact are more important than applied forces. The underactuation
and compliance of the manipulator essentially place several cascaded low-pass filters between the fingertips and the base of the gripper. To account for this, while keeping the sensor out of the potential grasping area, the accelerometer was mounted on the distal phalanx of the single opposable finger as shown in Figure 2.7.

Figure 2.6: Northrop Grumman HD-2 Manipulator
The values from the sensor were passed through a power supply/signal conditioner,
and then read through an A/D input on an Arduino Duemilanove microcontroller.
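The conversion from a raw A/D reading to an acceleration value can be sketched as below (Python for illustration; the 10-bit, 0-5 V conversion matches the Arduino's A/D converter, but the conditioner bias and sensitivity are placeholder values, not Kistler specifications):

```python
def adc_to_accel(raw_count, vref=5.0, adc_bits=10, bias_v=2.5, sens_v_per_g=0.1):
    """Convert a raw ADC count to acceleration in g.

    The signal conditioner is assumed to center its output at bias_v and
    scale it by sens_v_per_g; both values are illustrative placeholders."""
    volts = raw_count * vref / (2 ** adc_bits - 1)  # 10-bit count -> volts
    return (volts - bias_v) / sens_v_per_g
```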
Figure 2.7: Mounted Kistler Accelerometer
2.5 Feedback Devices
Taking inspiration from several proven methods in robotic minimally invasive surgery, both surrogate visual force feedback and surrogate vibrotactile force feedback were provided to the user.
2.5.1 Vibrotactor
The VPM vibrotactor was selected because it was small enough to be
easily mounted to the input device as shown in Figure 2.8. Additionally, it drew
little enough current such that it could safely be driven directly through the pulse
width modulation channel of the microcontroller without any additional amplifying
circuit. Such a vibrotactor is also known as a pager motor, due to its ubiquitous use in (originally) pagers and (now) cell phones to indicate an incoming message or call without a significant audible signal.

Figure 2.8: Mounted Vibrotactor
2.5.2 Graphical Feedback System
A graphical feedback system (Figure 2.9) was designed using MATLAB in order to
guide the user through experimentation and also to provide camera information and
visual force information. The system was built using the MATLAB GUI Development
Environment and provided users with the camera feed, the visual force bar, the full
state of the system (position, velocity, current), and the control frequency for purposes
of debugging and ensuring that the system was working properly.
2.6 System Integration
Due to the number of interworking parts used in experimentation, system integration was extremely important and also the most time-consuming part of this
research. Several important pieces were needed to operate the system successfully.
First, a microcontroller was needed in order to read sensor information through its A/D channel. It was also used as a simple interface for sending commands to the vibrotactor. Next, a camera was needed to display visual information to the user.
Finally, an accurate F/T sensor on an instrumented object was needed in order to
accurately measure the forces being applied for purposes of data logging, as the information from the sensors on the robot was inaccurate and did not measure forces
directly.
2.6.1 Microcontroller
In order to integrate the components of the system, an A/D converter was needed
to read sensor information into MATLAB. Additionally, a variable voltage source was
required in order to drive the vibrotactors.
The Arduino Duemilanove, shown in Figure 2.10 (reproduced from [41]), was chosen
for its low cost and ease of use. It contains 14 digital input/output pins, including 6
that are capable of pulse width modulation, 6 analog input pins, 3.3 and 5V reference
signals, and serial connection pins. The Arduino can be powered with a 9V battery
or a USB connection, operates at a clock frequency of 16 MHz, and has 32 KB of flash memory thanks to the ATmega328 chip that it employs. The Arduino is programmed in C/C++ using the Wiring library within an Integrated Development Environment.

Figure 2.10: Arduino Duemilanove
The Arduino was used as an I/O device and also as an A/D converter. The Arduino was mounted to the back of the Dual Action Gamepad. Commands were sent to the vibrotactors using the pulse width modulation pins. Commands were received from MATLAB as serial messages ranging from 0-100. These messages were scaled to 8-bit levels (0-255) and supplied to the vibrotactor through the pulse width modulation channel. These levels corresponded to average output voltages of 0-5 V, respectively.
When MATLAB required data from the sensors wired to the Arduino, it sent the message ‘p’ (for ping) through the serial port. This resulted in the Arduino returning all of the applicable sensor data from the A/D converter in addition to a time stamp.
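The host side of this protocol can be sketched with two small helpers (Python for illustration; the original host code was MATLAB, and the comma-separated reply format shown here is an assumption, as the exact wire format is not specified in the text):

```python
def vibro_command(intensity_pct):
    """Scale a 0-100 vibrotactor intensity command to an 8-bit PWM level (0-255)."""
    if not 0 <= intensity_pct <= 100:
        raise ValueError("intensity must be in 0-100")
    return round(intensity_pct * 255 / 100)

def parse_ping_reply(line):
    """Split a hypothetical 'timestamp,ch0,ch1,...' reply to the 'p' ping
    into a timestamp and a list of raw A/D channel values."""
    fields = line.strip().split(",")
    return int(fields[0]), [int(v) for v in fields[1:]]
```

In practice the scaled value would be written to the board over the serial port and the reply line read back on the next ping.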
Figure 2.11: Logitech Quickcam
2.6.2 Digital Video Camera
Similar to the OCUs on which EOD technicians currently view output from cameras mounted to robots, the user viewed the scene through a camera feed rather than looking at it directly. In general, the displays on EOD robots do not tend to be high-fidelity systems. Additionally, the view from any given angle is typically occluded by
either the robotic gripper or the robotic arm. As such, a camera was selected based
on price only, without considering quality. The Logitech Quickcam (Figure 2.11) was
suitable and readily available. It was read into MATLAB through a USB port and
displayed to the user.
With both the camera display and communication through each part of the system
running through MATLAB, the frame rate suffered significantly over what it might
have been had the system been in a stand-alone custom program. Efforts were made
to increase the control frequency to an acceptable rate (∼10 Hz), but some delay and digitization was desired in order to create a reasonable facsimile of EOD telemanipulation.
An example of the video quality can be seen in Figure 2.9. Again, in order to
replicate working environments, one finger of the robot was intentionally occluded
and no attempt was made to fix the resolution of the camera feed.
Although the user was not able to directly view the object, the user was allowed to
listen to auditory cues from the object and gripper motor. While the option of blocking the user’s aural channel was considered, most EOD robots have a microphone and
speaker system on the OCU through which the user can receive auditory information
about the state of the robot. Thus, we allowed the natural aural feedback to remain.
More accuracy could have been achieved by recording and playing the audio in sync
with the video.
2.6.3 Force/Torque Sensor
In order to provide measurements on the amount of force being exerted by the user
during the experiments, an accurate force/torque (F/T) sensor was needed. The ATI
Mini45 F/T sensor, shown in Figure 2.12, reproduced from [42], was chosen for its
balance of package size and durability, as well as its relatively high sensitivity in all
six degrees of freedom. The sensing range and resolution for forces and torques are
displayed in Tables 2.5 and 2.6, respectively.
               Fx       Fy       Fz
Sensing Range  145 N    145 N    290 N
Resolution     1/16 N   1/16 N   1/16 N

Table 2.5: Sensing range and resolution of forces for the ATI Mini45
Sensing Range
Resolution
Tx
5 N-m
1/752 N-m
Ty
5 N-m
1/752 N-m
Tz
5 N-m
1/1504 N-m
Table 2.6: Sensing range and resolution of torques for the ATI Mini45
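Because the sensor reports forces in discrete steps of the resolution above, raw readings map to newtons by a simple scaling. The helper below is a hypothetical Python sketch (the actual ATI driver returns calibrated values directly); the defaults use the Fx/Fy figures from Table 2.5.

```python
def counts_to_force(counts, resolution=1/16, sensing_range=145.0):
    """Scale raw counts by the sensor resolution (N per count) and flag
    readings at or beyond the sensing range."""
    force = counts * resolution
    return force, abs(force) >= sensing_range
```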
To effectively utilize and protect the sensor, it was built into an instrumented
object (Figure 2.13) onto which the robotic end effector could grip. In order to provide
a sufficiently linear correlation between grasp position and grasp force, the force sensor
was placed between two compliant objects. Each object was hemispherical, with a
radius of 32 mm, and was composed of Smooth-On OOMOO-25 silicone rubber (see
Appendix B.4).
The process of curing the rubber involves mixing equal volumes of two compounds
(A and B) together for 5 minutes, then pouring the mixture into a mold and letting it
cure for 75 minutes. In order to produce hemispheres that follow Hooke's Law over a
fairly large range of compression, efforts were made to decrease the hardness of the
resulting silicone compound. It was found that by deviating from the 1:1 ratio, the
hardness of the compound could be controlled with a high degree of repeatability.
The following A:B ratios were tested: 1:3, 2:3, 1:1, 3:2, and 3:1.
Figure 2.12: ATI Mini45 Force/Torque Sensor
It was found that the hardness of the silicone was proportional to the content of
compound A. In all cases other than the control, the curing times were significantly
longer than the recommended 75 minutes; this was particularly true of the
combinations with a high content of compound B. For the 1:3 combination, the cure
time was on the order of 10 hours.
The 1:3 compound was ultimately selected, as it was qualitatively found to have
significantly lower hardness than either the control (1:1) compound or the 3:1
compound.
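The Hooke's Law behavior that motivated the softer compound can be made quantitative by fitting a line through the origin to force-compression samples. The sketch below is a minimal Python illustration with a hypothetical function name and data; the thesis assessed linearity qualitatively.

```python
def fit_stiffness(compression_mm, force_n):
    """Least-squares fit of F = k*x (a line through the origin) plus a
    simple R^2 to judge how closely the sample follows Hooke's Law."""
    k = (sum(f * x for f, x in zip(force_n, compression_mm))
         / sum(x * x for x in compression_mm))
    mean_f = sum(force_n) / len(force_n)
    ss_res = sum((f - k * x) ** 2 for f, x in zip(force_n, compression_mm))
    ss_tot = sum((f - mean_f) ** 2 for f in force_n)
    return k, 1.0 - ss_res / ss_tot
```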
The F/T sensor was placed between the two silicon hemispheres and separated
from them by an acrylic disk. Screws were set into the hemispheres, passed through
the acrylic, and were secured to the F/T sensor. A slit was cut out of the left
hemisphere in order to allow the data cable to exit the instrumented object properly.
Figure 2.13: Grasping object instrumented with the ATI Mini45 F/T Sensor (pen for scale)
2.6.4 Framework and Setup
The various parts of the system were linked together as shown in Figure 2.14. Two
laptops were needed, as five Universal Serial Bus (USB) ports were required in addition
to a Personal Computer Memory Card International Association (PCMCIA) card.
Data logging took place on both laptops. On the first laptop, MATLAB logged
the following information about the gripper and the input device: time, position,
velocity, current, calculated pressure, user input, and feedback mode. On the second
laptop, information from the F/T sensor was recorded. Although six measurements
were available from the sensor, only the force in the Z direction (aligned with the
principal axis of the object) was used for analysis.
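The per-sample log on the first laptop can be sketched as a simple CSV writer. This is a hypothetical Python illustration of the field list above (the CSV container and function names are assumptions; the actual logging was done inside MATLAB).

```python
import csv
import io

# Fields logged on the first laptop, per Section 2.6.4.
FIELDS = ["time", "position", "velocity", "current",
          "calculated_pressure", "user_input", "feedback_mode"]

def make_logger(stream):
    """Return a csv.writer with the header row already written."""
    writer = csv.writer(stream)
    writer.writerow(FIELDS)
    return writer

def log_sample(writer, sample):
    """Append one sample dict as a row; missing fields are left blank."""
    writer.writerow([sample.get(f, "") for f in FIELDS])

# Usage: log one sample to an in-memory buffer.
buf = io.StringIO()
w = make_logger(buf)
log_sample(w, {"time": 0.1, "position": 42, "feedback_mode": "vibrotactile"})
```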
Figure 2.14: System Framework
Chapter 3
Experiment
Using the apparatus described in Chapter 2, an experiment was performed to test
the users’ ability to teleoperate the gripper to grasp the instrumented object with
minimal force.
3.1 Preliminary Experiments
Several preliminary experiments took place in order to determine which experiments
and methods would be most appropriate. First, the output from the gripper-mounted
accelerometer was measured. Next, the F/T sensor and current sensor outputs
were measured in tandem to reveal their similarities or differences. Finally, early tests
determined which control scheme would be the most effective for controlling the robot
with the input device.
Figure 3.1: Accelerometer test output showing three separate grasps, noted in red, of
the instrumented object
3.1.1 Accelerometer Test
Although accelerometer data has been used successfully in prior work [19], that work
involved smooth, low-backlash systems with no gearing. In contrast, our system was
heavily geared and had significant backlash. An experiment was done to determine
how this would affect the accelerometer output.
With the accelerometer mounted to the gripper, its output was read into an oscilloscope while the gripper moved through a series of poses. The gripper closed onto
the compliant object, squeezed, and opened several times. The output at first seemed
to indicate that the vibrations from the gearing masked the effects of grasping the
object completely (Figure 3.1).
Upon further inspection and filtering, however, it was found that event detection
could take place with the accelerometer: not by looking for increases in the signal
where contact took place, but rather for regions where the signal was damped by the
low-pass filter effect of the compliant object. A frequency analysis of the signal (e.g.,
a Fast Fourier Transform) likely could have made the data more readily usable,
but an adequate means of reading the signal and applying the transform
was not immediately available. Although the data showed that the accelerometer
was technically usable, it was decided that the sensor should not be used, as the
identification of contact would not be consistent between compliant objects and rigid
ones.
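The damping-based detection described above can be sketched without a full FFT: a windowed RMS of the first-difference signal serves as a crude high-frequency energy measure, and contact is flagged where that energy drops sharply. This is a minimal Python illustration; the function names and the 50% drop threshold are assumptions, not values from the thesis.

```python
def hf_energy(signal, window):
    """Windowed RMS of the first-difference signal: a crude proxy for
    high-frequency vibration content."""
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    energies = []
    for i in range(0, len(diffs) - window + 1, window):
        chunk = diffs[i:i + window]
        energies.append((sum(d * d for d in chunk) / window) ** 0.5)
    return energies

def detect_damping(energies, drop_ratio=0.5):
    """Window indices where vibration energy falls below drop_ratio times
    the previous window -- the low-pass effect of grasping a compliant
    object damping the gear vibration."""
    return [i for i in range(1, len(energies))
            if energies[i] < drop_ratio * energies[i - 1]]
```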
3.1.2 F/T Sensor and Current Sensor Test
In order to test the effectiveness of the current sensor data, its output was compared
against that of the F/T sensor. Data from both sensors was logged while the gripper
contacted, squeezed, and released the object four times in succession. Each
squeeze was intended to be harder than the last.
As can be seen, the data from the F/T sensor (Figure 3.2) clearly showed each grasp
as it took place. As intended, the strength of each grasp
increased from the one before it.
This can be compared to the torque calculated from the current sensor output
(Figure 3.3). First, it is obvious that there are
forward supplied a ramp input to the same PID controller. This scheme would have
felt similar to velocity control, but would have differed in that the former, when given
a command of 0, would have continued to try to reach its desired position, whereas
the latter would have immediately stopped.
Finally, velocity control was examined and was ultimately found to provide much
better performance than the other schemes. This was due in part to the low sampling
rate, which caused instability during position control. The instability could be
decreased by adjusting gains, but the responsiveness of the system suffered as a result.
Ultimately, velocity control was used for all further experimentation.
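The velocity scheme that was adopted can be sketched in a few lines: the joystick deflection is scaled into a velocity command and integrated, so releasing the stick stops the gripper at once. This is a hypothetical Python illustration; the 0.08 gain mirrors the `Kspeed` value in HapGui.m (Appendix A), while `dt` and the function name are assumptions.

```python
def velocity_step(position, axis, k_speed=0.08, dt=0.1):
    """One control tick: joystick deflection commands velocity directly,
    so a zero input stops the gripper immediately (unlike the position
    schemes, which keep driving toward a stale setpoint)."""
    velocity = k_speed * axis          # axis: normalized deflection in [-1, 1]
    return position + velocity * dt, velocity
```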
3.2 Methods
Three experiments were originally intended. The first would test the effects of
haptic feedback on the user’s ability to detect contact with an object. The second
would test the effects of haptic feedback on the user’s ability to accurately apply
a given level of force to an object. The third would test the system qualitatively in a
situation similar to the operational environment.
Because of the results of the F/T and current sensor test, the second experiment
was no longer seen as applicable, as the force information gained from the current sensor
was indirect at best. Time was the limiting factor for the third experiment, though it
is a priority for future work. Thus, the experiment described below is for the contact
detection task.
3.2.1 Procedure
This experiment measured the effectiveness of the system in decreasing peak and
sustained forces applied by the user in a contact/grasping task. The user was given
control of the gripper via the joystick on the gamepad. The user was then instructed
to close the gripper as lightly as possible until two opposing fingers came into contact
with an object instrumented with the F/T sensor. The object and manipulator were
placed such that the fingers of the gripper closed around the principal axis of the
instrumented object (Figure 3.4). The user was allowed to use the following forms
of information in order to detect when contact had occurred: visual information
through the camera display, a surrogate force feedback visual display, or a surrogate
vibrotactile force feedback display.
Five subjects between the ages of 24 and 28 were recruited. Three were male, two
were female. Four were right-hand dominant, one was left-hand dominant. None of
the subjects had any neurological disorders, injuries to their dominant hand, impaired
vision, or any other circumstance which might affect their ability to successfully perform
the task. The users gave informed consent. The protocol was approved by the Johns
Hopkins University Institutional Review Board.
Before the trials began, each user took a brief pre-experiment survey. The subject
was shown the experimental setup, including the robotic gripper. Then the subject
Figure 3.4: The setup of the gripper and instrumented object during the experiment
was seated such that he or she could not see the gripper and instrumented object
except through the camera setup. After explaining the types of feedback to expect,
the subject was allowed to freely test the system with all feedback modes present for
an unlimited period of time. Following this, the subject notified the researcher that
he or she was ready to begin the experiment. The subject was instructed to press the
“2” button in order to start the experiment.
After pressing “2” for the first time, the GUI randomly selected the first feedback
modality to be given to the user. This information was displayed on the GUI so that
the subject would know what to expect.
The following exchange then took place:
Experimenter: “Close the gripper.” The subject would then proceed to close the
gripper.
Subject: “Done.” The subject would respond as such when they believed they
were in contact with the object. The experimenter would check and then respond.
Experimenter: “Good. Open the gripper and press 2” OR “No contact. Close the
gripper more.”
Pressing the “2” button during a trial would end that trial and begin the next
trial. The same procedures were used in each trial. The trial number was displayed
on the GUI. After every 5 trials, the GUI would randomly select one of the remaining
feedback modalities.
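The block-randomized presentation order can be sketched as sampling the modalities without replacement, each held for a block of five trials. This is a hypothetical Python illustration; the GUI performed the equivalent selection in MATLAB.

```python
import random

def modality_schedule(modalities, trials_per_block=5, rng=None):
    """Randomized block order: each modality is drawn once, without
    replacement, and repeated for a block of consecutive trials."""
    rng = rng or random.Random()
    order = list(modalities)
    rng.shuffle(order)
    return [m for m in order for _ in range(trials_per_block)]
```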
At the end of the experiment, the user was asked to take a brief post-experiment
survey. The user ranked the performance of the task under each feedback condition
from the following options:
(1) Very Easy
(2) Easy
(3) Moderate
(4) Hard
(5) Very Hard
Following this, the user was asked to comment on which strategies he or she used
for each task and any further comments.
3.3 Results
In order to determine whether users' performance under the different feedback
modalities was statistically significantly different, ANOVA was used with a familywise
error rate (αFW) of 0.05. Box's epsilon-hat adjustment was used to correct for
violations of sphericity. The statistical results are shown in Table 3.2.
Table 3.2: Table of statistical significance. (1) No feedback, (2) Surrogate Visual
Feedback, (3) Surrogate Vibrotactile Feedback
The improvements in performance with vibrotactile feedback over no feedback were
statistically significant, while those with the visual feedback were not.
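For reference, the F statistic for a one-way repeated-measures ANOVA of this kind follows from the standard sum-of-squares decomposition. The sketch below is a minimal Python illustration with made-up data; it omits the Box epsilon-hat sphericity correction applied in the actual analysis.

```python
def rm_anova_F(scores):
    """One-way repeated-measures ANOVA F statistic.
    scores[s][c] is the measurement for subject s under condition c."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    cond_means = [sum(row[c] for row in scores) / n for c in range(k)]
    subj_means = [sum(row) / k for row in scores]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)   # condition effect
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # subject effect
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_tot - ss_cond - ss_subj                       # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)
```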
In addition to the experimental data, users' preferences as stated on their surveys
were collected. These data were placed in Table 3.3, and the average results for each
modality are displayed in Figure 3.6.
As the data shows, improvements were made by giving the user haptic feedback.
The vibrotactile feedback provided larger improvements for this task and was also
          None   Visual   Vib
Sub1        4      3       1
Sub2        3      2       1
Sub3        3      3       2
Sub4        2      4       3
Sub5        3      3       1
Average     3      3      1.6
StDev      0.4    0.4     0.72

Table 3.3: Post-experiment survey average results. (1) - Very Easy, (2) - Easy,
(3) - Moderate, (4) - Hard, (5) - Very Hard
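The average ratings in Table 3.3 follow directly from the per-subject scores. A short Python check, with the data transcribed from the table:

```python
from statistics import mean

# Ratings transcribed from Table 3.3 (1 = Very Easy ... 5 = Very Hard).
ratings = {
    "None":         [4, 3, 3, 2, 3],
    "Visual":       [3, 2, 3, 4, 3],
    "Vibrotactile": [1, 1, 2, 3, 1],
}

averages = {mode: mean(vals) for mode, vals in ratings.items()}
```

The means reproduce the Average column of the table (3, 3, and 1.6).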
the motion of the object in all cases, and used the vibration or visual bar as a check.
As this object was both compliant and lightweight, it suggests that this feedback
may be even more helpful in the event that a heavy, rigid object, such as a piece of
ordnance, needs to be manipulated.
Chapter 4
Conclusions
Teleoperation systems that operate without haptic feedback are significantly limited in what they can accomplish when compared to high-performance haptic feedback
systems. The benefits of even limited amounts of feedback have been shown in literature, and in the experiments described here, to be substantial. Despite this, no
currently fielded EOD robotic systems display any type of force information to the
user. This seriously impedes the ability of an EOD technician to work on a piece of
ordnance remotely and limits them to gross manipulation and pick-and-place tasks.
Cost-effective, robust solutions to this problem are particularly applicable to the EOD
environment.
4.1 Contributions
In the first chapter, the perceived benefits of haptic feedback for EOD are described.
While actual force feedback can increase user performance, the use of sensory
substitution, in the form of a visual, audio, or tactile display, can provide the
information to the user without dynamically affecting the use of the input device. The
history of robotics in Explosive Ordnance Disposal is described as well. While this
review is not a comprehensive description of the numerous projects that have taken
place over the years, it is, to the best of the author’s knowledge, the most thorough
examination of the topic in a single document.
In the second chapter, the experimental setup is described. This setup consisted of
a robotic gripper, manipulator arm, camera setup, microcontroller and various other
components. The chapter focused on the successful integration of these components.
Finally, in the third chapter the experimental methods and results are described.
The experiment tested users’ ability to detect a contact event with two types of
surrogate haptic feedback. Vibrotactile feedback was found to reduce the threshold
at which the user detected contact from 8.43 N to 5.97 N on average. Additionally,
in a survey given to users at the end of the experiment, users were found to prefer
this type of feedback over the other conditions. Surrogate visual feedback did not
substantially increase user performance and was not well received by the subjects.
4.2 Future Work
This research can be continued to further explore the best design practice and
performance measures for haptic feedback for teleoperated EOD robots.
4.2.1 Additional Experiments
As part of this work, we designed two additional experiments that have not yet
been performed, as they required additional sensors.
4.2.1.1 Sustained Force Experiment
Having established the effects of haptic feedback on user contact forces, another
experiment should measure the ability of the system to assist the user in repeating
grasps of a constant and sustained force. The user would be given control of the
manipulator without needing control of the robotic arm. The gripper would be placed
in a position to contact the principal axis of the instrumented object when it was
closed. The user would then be trained to know when a certain force threshold had
been hit. The information given from the current sensor is not sufficient for this task
as the system is not backdrivable, and the gearing, rather than the current, holds the
sustained force. Thus, this experiment would only be possible if other sensors, such
as strain gages, could be used to sense applied force.
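If strain gages were added, the usual quarter-bridge approximation would convert the bridge output voltage to strain, and a calibration constant would map strain to force. The sketch below is a hypothetical Python illustration: the gauge factor, excitation voltage, and calibration constant are placeholders, not values from any sensor used in this work.

```python
def strain_from_bridge(v_out, v_excite=5.0, gauge_factor=2.0):
    """Quarter-bridge small-strain approximation:
    strain = 4 * Vout / (GF * Vexcitation)."""
    return 4.0 * v_out / (gauge_factor * v_excite)

def force_from_strain(strain, calib_n_per_strain=1.0e6):
    """Map strain to force with a (placeholder) linear calibration constant."""
    return strain * calib_n_per_strain
```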
The subject would slowly close the gripper while observing a given feedback
method. Feedback would be given from the computer when the subject reached a
particular force threshold. This training would be repeated five times. The feedback
methods that the user is allowed to observe would be the following: visual, surrogate
visual, and surrogate vibrotactile.
Following training, the subject would be asked to achieve the given force threshold
using only the information given by the selected feedback modality.
4.2.1.2 Real-World Task
Another important experiment would replicate a real-world task that an EOD
technician would likely undertake. Using this task, qualitative data would be gained
correlating the gains from the first two tasks to real-world gains in actual mission
performance. A domain expert is critical for this experiment in order to provide
information on specific benefits gained through haptic feedback.
4.2.2 Further Areas
There are several additional areas where future work may be beneficial. First, the
experiments focused primarily on near-term haptic technologies for the EOD environment. Further research needs to be done to explore the effectiveness of a larger
variety of feedback modalities, particularly those that have potential to be significant
enabling technologies in the future. Bilateral manipulator arms and kinesthetic feedback through the manipulator should be examined in full, as should the full range of
near-term sensory substitution technologies that were not examined in this work.
Additionally, research should be done to generalize the results for manipulators
that are truly dexterous and possess degrees of freedom on the order of the human
hand. Doing so would allow for further applications of this technology to later iterations of EOD robots, which will likely feature components that are anthropomorphic
and dexterous.
Finally, the settings in which EOD robots are fielded add significant further constraints to teleoperation and the design of useful feedback systems. Specifically, the
effects of digitization and delay are both significant and may have important implications for the effectiveness of various types of feedback modalities. These system
properties should be measured and techniques should be explored in order to minimize their detrimental effects. In addition, there are significant constraints on the
size of mobile OCUs and operator input devices.
It is imperative that work such as this continues so that EOD technicians can
have the best possible equipment in the field.
Bibliography
[1] H. G. Nguyen and J. P. Bott, “Robotics for Law Enforcement: Applications Beyond Explosive Ordnance Disposal,” in Proceedings of SPIE International Symposium on Law Enforcement Technologies, 2000.
[2] Department of Defense, "Casualty Report," 2011, http://www.defense.gov/news/casualty.pdf.
[3] C. Wilson, “Improvised Explosive Devices (IEDs) in Iraq and Afghanistan: Effects and Countermeasures,” 2007.
[4] A. M. Bottoms and C. Scandrett, Eds., Applications of Technology to Demining.
Monterey, California: Society for Counter Ordnance Technology, 2002.
[5] GlobalSecurity.org, www.globalsecurity.org.
[6] Rear Admiral Michael P. Tillotson, USN, Navy EOD: Then and Now - Frogmen, UDTs, SEALs and Explosive Ordnance Disposal Teams from World War II to Iraq, United States Navy Memorial, 2010. Comments as a panelist at the United States Navy Memorial on 13 October 2010.
[7] R. Potter, “How to compete with offshore low labor costs: Employ highly skilled
labor at 30 cents per hour,” http://www.robotpackaging.com.
[8] Kawasaki Robotics, www.kawasakirobotics.com.
[9] J. Carlson and R. Murphy, “How UGVs physically fail in the field,” IEEE Transactions on Robotics, vol. 21, no. 3, June 2005.
[10] Public Broadcasting System, "The cost of land mines," http://www.pbs.org/saf/1201/features/landmines2.htm.
[11] J. Krasner, "Robots going in harms way," The Boston Globe, March 2007, http://www.boston.com/business/technology/articles/2007/03/12/robotsgoinginharmsway
[12] G. Niemeyer, C. Preusche, and G. Hirzinger, Springer Handbook of Robotics,
B. Siciliano and O. Khatib, Eds. New York, NY: Springer, 2008.
[13] M. J. Massimino, “Improved force perception through sensory substitution,”
Control Engineering Practice, vol. 3, no. 2, pp. 215–222, February 1995.
[14] J. C. Gwilliam, M. Mahvash, B. Vagvolgyi, A. Vacharat, D. D. Yuh, and A. M.
Okamura, "Effects of haptic and graphical force feedback on teleoperated palpation," in International Conference on Robotics and Automation, 2009.
[15] M. Kitagawa, "Indirect feedback of haptic information for robot-assisted telemanipulation," Master's thesis, The Johns Hopkins University, September 2003.
[16] M. Kitagawa, D. Dokko, A. M. Okamura, and D. D. Yuh, "Effect of sensory substitution on suture manipulation forces for robotic surgical system," The Journal of Thoracic and Cardiovascular Surgery, vol. 129, no. 1, pp. 151–158, 2003.
[17] C. E. Reiley, “Evaluation of augmented reality alternatives to direct force feedback in robot-assisted surgery: Visual force feedback and virtual fixtures,” Master’s thesis, The Johns Hopkins University, April 2007.
[18] R. E. Schoonmaker and C. G. L. Cao, “Vibrotactile force feedback system for
minimally invasive surgical procedures,” in IEEE Conference on Systems, Man,
and Cybernetics, October 2006.
[19] K. Kuchenbecker, J. Gewirtz, W. McMahan, D. Standish, P. Martin, J. Bohren,
P. Mendoza, and D. Lee, "VerroTouch: High-frequency acceleration feedback for
telerobotic surgery,” in Lecture Notes in Computer Science, vol. 6191, 2010, pp.
189–196.
[20] M. R. Tremblay and M. R. Cutkosky, “Using sensor fusion and contextual information to perform event detection during a phase-based manipulation task,” in
International Conference on Intelligent Robots and Systems, August 1995.
[21] J. M. Hyde, M. R. Tremblay, and M. R. Cutkosky, “An object-oriented framework for event-driven dextrous manipulation,” in 4th International Symposium
on Experimental Robotics, June 1995.
[22] J. D. Bartleson, History of U.S. Navy Bomb Disposal. Virginia Beach, Virginia:
U.S. Navy Explosive Ordnance Disposal Association, 1992.
[23] A. B. Hartley, Unexploded Bomb. New York, New York: W. W. Norton and Company Inc, 1958.
[24] Byron Brezina et al., “Analysis of Alternatives Advanced Explosive Ordnance
Disposal Robot System,” 2008.
[25] The Times, “Lieutenant-Colonel ‘Peter’ Miller: Inventor of the Wheelbarrow
remote control bomb disposal device that saved countless lives,” 2006.
[26] P. Birchall, The Longest Walk: The World of Bomb Disposal. London: Arms and Armour Press, 1997.
[27] Defense Industry Daily, www.defenseindustrydaily.com.
[28] Byron Brezina. Telephone correspondence. Interview conducted on 12 December 2010.
[29] iRobot, www.iRobot.com.
[30] Hydroid Incorporated, www.hydroidinc.com.
[31] Rob Simmons. Telephone correspondence. Interview conducted on 23 November 2011.
[32] C. Debolt, Ed., Applications of Technology to Demining. Society for Counter Ordnance Technology, 2002, ch. The BUGS "Basic UXO Gathering System" project for UXO clearance & mine countermeasures.
[33] T. N. Nguyen, C. O’Donnell, and T. B. Nguyen, “Multiple autonomous robots
for UXO clearance, The Basic UXO Gathering System (BUGS) Project,” vol. 3.
[34] “Lightweight robot for demining,” Applications of Technology to Demining,
vol. 2, no. 1, 2002.
[35] J. D. Nicoud, “Vehicles and robots for humanitarian demining,” vol. 24, pp.
164–168.
[36] M. Buehler, “RTK - Remote Touch Kit: Final Technical Report and Test Results,” September 2010, Report prepared for iRobot.
[37] A. Kron, G. Schmidt, B. Petzold, M. I. Zah, P. Hinterseer, and E. Steinbach,
“Disposal of explosive ordnance by use of a bimanual haptic telepresence system,”
in Proceedings of IEEE International Conference on Robotics and Automation,
vol. 2, May 2004, pp. 1968 – 1973.
[38] Panel on Human Factors in the Design of Tactical Display Systems for the Individual Soldier, National Research Council, Tactical Displays for Soldiers: Human
Factors Considerations. National Academies Press, January 1997.
[39] The Johns Hopkins University Applied Physics Laboratory Revolutionizing Prosthetics 2009 Team, "Revolutionizing Prosthetics 2009 MATLAB Repository."
[40] Contineo Robotics, www.contineo-robotics.com.
[41] M. Banzi and D. Cuartielles, www.arduino.cc.
[42] ATI Industrial Automation, www.ati-ia.com.
Appendix A
Code
A.1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
HapGui.m
function varargout = HapGui(varargin)
% HAPGUI M-file for HapGui.fig
%
HAPGUI, by itself, creates a new HAPGUI or raises the existing
%
singleton*.
%
%
H = HAPGUI returns the handle to a new HAPGUI or the handle to
%
the existing singleton*.
%
%
HAPGUI(’CALLBACK’,hObject,eventData,handles,...) calls the local
%
function named CALLBACK in HAPGUI.M with the given input arguments.
%
%
HAPGUI(’Property’,’Value’,...) creates a new HAPGUI or raises the
%
existing singleton*. Starting from the left, property value pairs are
%
applied to the GUI before HapGui_OpeningFcn gets called. An
%
unrecognized property name or invalid value makes property application
%
stop. All inputs are passed to HapGui_OpeningFcn via varargin.
%
%
*See GUI Options on GUIDE’s Tools menu. Choose "GUI allows only one
%
instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
22
23
% Edit the above text to modify the response to help HapGui
24
25
% Last Modified by GUIDE v2.5 19-Jan-2011 12:26:50
79
A.1. HAPGUI.M
APPENDIX A. CODE
26
27
28
29
30
31
32
33
34
35
36
37
% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct(’gui_Name’,
mfilename, ...
’gui_Singleton’, gui_Singleton, ...
’gui_OpeningFcn’, @HapGui_OpeningFcn, ...
’gui_OutputFcn’, @HapGui_OutputFcn, ...
’gui_LayoutFcn’, [] , ...
’gui_Callback’,
[]);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
38
39
40
41
42
43
44
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
% --- Executes just before HapGui is made visible.
function HapGui_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject
handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles
structure with handles and user data (see GUIDATA)
% varargin
command line arguments to HapGui (see VARARGIN)
handles.arduino = 1;
if handles.arduino
s1 = serial(’COM4’); %define serial port for the sensor board input
s1.BaudRate=9600; %define baud rate
fopen(s1); %open serial port
handles.s1 = s1; %Establishes a global variable for accessing the serial port
end
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
handles.run = 1; %Establishes the global variable "run"
handles.numTrials = 0; % Keeps track of which individual trial the user is on
handles.conditionNum = 1;
handles.maxConditions = 3;
handles.maxTrials = 5; % How many trials the user will do with each setting
handles.w = rand(3,1);
disp(’Arduino... READY’)
disp(’Updating Paths...’);
cd C:\Users\owner\Desktop\TEMPBurtness\RP2009\VRE\Common
addpath_Common
disp(’Paths Updated - READY’);
disp(’Initializing Manipulator and Controller...’);
J = JavaJoystick;
M = MainDrive;
80
A.1. HAPGUI.M
76
77
78
79
80
81
82
83
84
85
86
87
88
APPENDIX A. CODE
handles.M = M;
handles.J = J;
handles.pressure = 0;
%handles.vid = videoinput(’winvideo’, 1, ’YUY2_320x240’);
%vid = handles.vid;
%vid.ReturnedColorSpace = ’grayscale’;
disp(’Initialization Complete’);
who
tic
handles.command = 0;
guidata(hObject, handles);
% UIWAIT makes HapGui wait for user response (see UIRESUME)
% uiwait(handles.figure1);
89
90
91
92
93
94
95
96
% --- Outputs from this function are returned to the command line.
function varargout = HapGui_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject
handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles
structure with handles and user data (see GUIDATA)
97
98
% Get default command line output from A handles structure
99
100
101
102
103
104
105
% --- Executes on button press in VisualForce.
function VisualForce_Callback(hObject, eventdata, handles)
% hObject
handle to VisualForce (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles
structure with handles and user data (see GUIDATA)
106
107
% Hint: get(hObject,’Value’) returns toggle state of VisualForce
108
109
110
111
112
113
114
% --- Executes on button press in vibroForce.
function vibroForce_Callback(hObject, eventdata, handles)
% hObject
handle to vibroForce (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles
structure with handles and user data (see GUIDATA)
115
116
% Hint: get(hObject,’Value’) returns toggle state of vibroForce
117
118
119
120
121
122
123
% --- Executes on button press in Execute.
function Execute_Callback(hObject, eventdata, handles)
% hObject
handle to Execute (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles
structure with handles and user data (see GUIDATA)
124
125
M = handles.M;
81
A.1. HAPGUI.M
126
127
128
129
130
131
132
133
134
APPENDIX A. CODE
J = handles.J;
Kspeed = .08; %Increases the gain of the speed
lastButton = 1;
trial = 1;
w = rand(3,1);
data = cell(3,5);
dataSamp = 1;
currentData = zeros(10000,7);
tStart = tic;
135
136
137
138
139
140
for i = 1:3
for j = 1:5
data(i,j) = mat2cell(zeros(10000,2));
end
end
141
142
143
144
145
146
147
while handles.conditionNum <= handles.maxConditions
while handles.numTrials <= handles.maxTrials% While you’re not done with all 5 trials
while handles.run == 1 %While you’re not done with this specific trial.
%% Selecting a controller.
load VOID
%load polynomialValues;
148
149
150
151
if handles.conditionNum > 1 && handles.numTrials == 0
handles.numTrials = 1;
end
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
if handles.numTrials == 0
VibForce = 1;
VisForce = 1;
end
handles.w
handles.run
maxWind = 0;
if handles.numTrials > 0;
[maxW, maxWind] = max(w);
if maxWind == 1
VibForce = 0;
VisForce = 0;
set(handles.modeText,’String’, ’None’)
elseif maxWind ==2
VibForce = 0;
VisForce = 1;
set(handles.modeText,’String’, ’Visual’)
else
VibForce = 1;
VisForce = 0;
set(handles.modeText,’String’, ’Vibration’)
end
else
82
A.1. HAPGUI.M
set(handles.modeText,’String’, ’ALL’)
176
177
APPENDIX A. CODE
end
178
179
180
181
182
183
184
185
186
187
188
189
190
191
J.getdata;
if J.buttonVal(2) == 1
set(handles.controlBox,’Value’,
handles.run = 0;
M.normalizedVelocity = 0;
pause(.5);
elseif J.buttonVal(1) ==1
set(handles.controlBox,’Value’,
elseif J.buttonVal(3)==1
set(handles.controlBox,’Value’,
else J.buttonVal(4)==1
set(handles.controlBox,’Value’,
end
0)
1)
1)
1)
192
193
194
195
196
197
198
199
        %% Running the controller
        if get(handles.controlBox,'Value')
            velocityControl;
        else
            M.normalizedVelocity = 0;
        end

        %% Get Data
        set(handles.trialText,'String', num2str(handles.numTrials));
        set(handles.conditionText, 'String', num2str(handles.conditionNum));
        if exist('velocity', 'var') == 0
            handles.velocity = 0;
        end
        if exist('acceleration', 'var') == 0
            handles.acceleration = 0;
        end

        motorData = M.get_data;
        command = handles.command;
        handles.oldCommand = command;
        oldCommand = handles.oldCommand;
        command = J.axisVal(4);
        handles.command = command;
        %disp(oldCommand)
        %disp(command)

        handles.oldVelocity = handles.velocity;
        handles.velocity = M.motorActualVelocity;
        delay = toc;
        tic;
        jerk = -(command - oldCommand);
        handles.acceleration = (handles.velocity - handles.oldVelocity)/(180*delay)*.1 + handles.acceleration*.9; % trailing term truncated in the source listing; *.9 low-pass weight assumed
        % disp(handles.acceleration)
        if handles.acceleration > 2
            handles.acceleration = 2;
        end
        if handles.velocity > 50
            handles.acceleration = 0;
        end
        set(handles.positionText, 'String', M.motorActualPosition);
        set(handles.velocityText, 'String', M.motorActualVelocity);
        set(handles.currentText, 'String', M.motorActualCurrent);
        set(handles.frequencyText, 'String', 1/delay);

        thetaTest = [40; 120; 200; 280; 360];
        [x, bestTheta] = min(abs(M.motorActualPosition - thetaTest));

        % Figuring out which boxes have been checked
        %VisForce = get(handles.VisualForce,'Value');
        %VibForce = get(handles.vibroForce,'Value');
        pressure = M.motorActualCurrent;
        J.getdata;
        yData = J.axisVal(4);
        if length(motorData) > 5
            if -yData*Kspeed > 0 && motorData(6) > 10
                estAccelTorque = 0;
                if handles.acceleration
                    estAccelTorque = 0; %polyval(PolynomialValuesAccel, handles.acceleration)
                end
                %estVelTorque = polyval(cell2mat(PolynomialValuesDesiredVelocity(bestTheta)), -J.axisVal(4));
                estVelTorque = polyval(cell2mat(PolynomialValuesActualVelocity(bestTheta)), handles.velocity);
                if sign(jerk)
                    estJerk = 50 + 75*jerk;
                else
                    estJerk = 0;
                end
                pressure = get(handles.Gain, 'Value')*(pressure - estAccelTorque - estVelTorque - estJerk);
            else
                pressure = 0;
            end
        end

        if pressure > 100
            pressure = 100;
        end
        if pressure < 0
            pressure = 0;
        end
        disp(handles.run)

        pressureOld = handles.pressure;
        handles.pressure = pressureOld*.5 + pressure*.5;
        pressure = handles.pressure;

        %Display data as asked by the system.
        if VisForce == 1
            axes(handles.VisBar)
            hold on;
            forcebarTest(pressure)
        end

        if VibForce == 1 && handles.arduino
            fwrite(handles.s1, pressure);
        end

        currentData(dataSamp,:) = [toc(tStart), pressure, yData(1,1), M.motorActualCurrent(1,1), M.motorActualVelocity(1,1), M.motorActualPosition(1,1), delay]; % last columns truncated in the source listing; velocity, position, and loop delay assumed
        dataSamp = dataSamp + 1;
        drawnow; % Command needed to have the plot reset
        guidata(hObject,handles);
    end %End of trial
    if handles.numTrials > 0
        data(handles.conditionNum, handles.numTrials) = mat2cell(currentData);
    end
    handles.numTrials = handles.numTrials + 1;
    set(handles.controlBox,'Value', 1)
    handles.run = 1;
    dataSamp = 1;
    tStart = tic;
    currentData = zeros(10000,7);
end % End of condition
handles.numTrials = 1;
w(maxWind) = 0;
handles.conditionNum = handles.conditionNum + 1;
end %End of experiment
handles.numTrials = 1;
handles.conditionNum = 1;
save('testData.mat', 'data')
% --- If Enable == 'on', executes on mouse press in 5 pixel border.
% --- Otherwise, executes on mouse press in 5 pixel border or over VisualForce.
function VisualForce_ButtonDownFcn(hObject, eventdata, handles)
% hObject    handle to VisualForce (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)


% --- Executes on button press in closeSerial.
function closeSerial_Callback(hObject, eventdata, handles)
% hObject    handle to closeSerial (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
fclose(handles.s1);
guidata(hObject, handles);


% --- Executes on button press in stop.
function stop_Callback(hObject, eventdata, handles)
% hObject    handle to stop (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
handles.run = 0;
M = handles.M;
M.normalizedVelocity = 0;
guidata(hObject,handles);


% --- Executes on slider movement.
function slider1_Callback(hObject, eventdata, handles)
% hObject    handle to slider1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'Value') returns position of slider
%        get(hObject,'Min') and get(hObject,'Max') to determine range of slider


% --- Executes during object creation, after setting all properties.
function slider1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to slider1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: slider controls usually have a light gray background.
if isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor',[.9 .9 .9]);
end


% --- Executes during object creation, after setting all properties.
function Gain_CreateFcn(hObject, eventdata, handles)
% hObject    handle to Gain (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: slider controls usually have a light gray background.
if isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor',[.9 .9 .9]);
end
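The main loop above estimates grip force from motor current, clamps it to the 0-100 display range, and then smooths it with a 50/50 weighted average of the old and new values (`handles.pressure = pressureOld*.5 + pressure*.5`). The same clamp-and-smooth step can be sketched in a few lines of Python; the function names here are mine, not part of the thesis code:

```python
def clamp(value, lo=0.0, hi=100.0):
    """Limit the current-derived force estimate to the display range."""
    return max(lo, min(hi, value))

def smooth(samples, alpha=0.5):
    """Exponentially smooth a noisy force signal.

    Each output is alpha*new + (1 - alpha)*previous output, which for
    alpha = 0.5 matches pressureOld*.5 + pressure*.5 in the listing.
    """
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * clamp(s) + (1 - alpha) * prev
        out.append(prev)
    return out

# A step input is approached geometrically: 50, 75, 87.5, 93.75, ...
print(smooth([100, 100, 100, 100]))
```

With `alpha = 0.5` the filter halves the remaining error each sample, trading a little lag for a much steadier force bar and vibrotactor drive.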
A.2    PositionStep.m

%% positionStep.m
% A script file by Alex J Burtness
%
% An input from the joystick is converted to a specific desired position
% for the motor. All the way forward corresponds with completely closed.
% All the way back corresponds with completely open. Gentle handling is a
% must.

%function [] = positionStep()

Kspeed = .02;
Ktorque = .1;
dpMax = 30;
dpOK = 3;
%velocity = M.motorActualVelocity;
torque0 = 100;
torque = torque0;

% Moving to the initial position corresponding with the joystick in center.

% Running the controller
J.getdata;
clc;
joystickOutput = J.axisVal;
yData = joystickOutput(4);
positionD = 200 - 200*yData;
motorData = M.get_data;
currentPosition = M.motorActualPosition;
deltaPosition = positionD - currentPosition;
if abs(deltaPosition) < dpOK
    velocity = 0;
    M.normalizedVelocity = velocity;
else %if abs(deltaPosition) < dpMax
    velocity = Kspeed*(deltaPosition) %+velocity*3/4;
    M.normalizedVelocity = velocity;
    torque = torque0;
%else
%    torque = torque + deltaPosition*Ktorque
end

if torque > 100
    torque = 100;
end

M.alexPosition(velocity, torque);

clc;
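positionStep.m maps the joystick deflection to a target position and commands a velocity proportional to the position error, with a deadband (`dpOK`) that zeroes the command near the target to prevent chatter. A minimal Python sketch of that logic, using the listing's gains (`Kspeed = .02`, `dpOK = 3`); the function name is mine:

```python
KSPEED = 0.02   # proportional gain, Kspeed in the listing
DP_OK = 3       # deadband half-width, dpOK in the listing

def position_step(y_joystick, current_position):
    """Map a joystick deflection in [-1, 1] to a target position and
    return a proportional velocity command, as positionStep.m does.
    Inside the deadband the command is zeroed to prevent chatter."""
    position_d = 200 - 200 * y_joystick   # +1 maps to 0, -1 maps to 400
    delta = position_d - current_position
    if abs(delta) < DP_OK:
        return 0.0
    return KSPEED * delta

print(position_step(0.0, 150))  # error of 50 -> full command of 1.0
print(position_step(0.0, 199))  # error of 1, inside deadband -> 0.0
```

The deadband matters in practice: without it, sensor noise around the target makes the proportional term dither the gripper back and forth.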
A.3    PositionRamp.m

%% positionRamp.m
% A function file by Alex J Burtness
%
% This controller runs similarly to the position controller, but converts
% the input from the joystick into a velocity for the desired position.
% After doing so it uses a position controller to reach that desired
% position.

Kspeed = .02;
Ktorque = .1;
Kposition = 50;
dpMax = 30;
dpOK = 3;
velocity = 0;
torque0 = 100;
torque = torque0;
positionD = M.motorActualPosition;

J.getdata;
clc;
joystickOutput = J.axisVal;
yData = joystickOutput(4);
positionD = positionD - yData*Kposition;
motorData = M.get_data;
currentPosition = M.motorActualPosition;
deltaPosition = positionD - currentPosition;
if abs(deltaPosition) < dpOK
    velocity = 0;
    M.normalizedVelocity = velocity;
else %if abs(deltaPosition) < dpMax
    velocity = Kspeed*(deltaPosition);
    M.normalizedVelocity = velocity;
    torque = torque0;
%else
%    torque = torque + deltaPosition*Ktorque
end

if torque > 100
    torque = 100;
end

M.alexPosition(velocity, torque)
A.4    VelocityControl.m

%% velocityControl.m
% A function file by Alex J Burtness
%
% The motor will run in velocity control quite smoothly.

Kspeed = .4; %Increases the gain of the speed
J.getdata; %Asks the GamePad to update data
joystickOutput = J.axisVal;
yData = joystickOutput(4); %Velocity is controlled with the Right Joystick
M.normalizedVelocity = -yData*Kspeed; %Setting the normalized velocity starts the velocity controller
clc
A.5    Forcebar.m

%% forcebarTest.m
%
function [] = forcebarTest(value)
if value <= 0
    value = 1;
elseif value > 100
    value = 100;
end
hsvflip = flipdim(hsv(300),1); %Selects the HSV color map, but upside down.
hsvmid = hsvflip(length(hsvflip)*.66:length(hsvflip),:);
value = ceil(value);
cla
bar(value)
%axis([.9,1,0,100])
colormap(hsvmid(value,:));

end
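forcebarTest.m clamps the force value into [1, 100], rounds it up, and uses the result to index into a truncated, flipped HSV colormap so the bar's color tracks the force level. The clamp-and-index step alone looks like this in Python (the colormap lookup itself is left abstract; the function name is mine):

```python
import math

def force_to_index(value):
    """Clamp a force reading to [1, 100] and round it up to an
    integer colormap index, as forcebarTest.m does before indexing
    into its flipped HSV colormap."""
    if value <= 0:
        value = 1
    elif value > 100:
        value = 100
    return math.ceil(value)

print(force_to_index(-5), force_to_index(42.3), force_to_index(250))  # 1 43 100
```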
A.6    CalibrationRun.m

%% CalibrationRun.m
% A script file by Alex J Burtness
%
% Created: 15OCT10
% Last Update: 19OCT10
%
% THIS CODE NEEDS ADDITIONAL STICTION MODELING AND COMMENTS
%
% This file automatically runs the hand through a series of motions in
% order to determine the amount of torque that is needed to move with
% constant velocity. Having done that, it finds how much torque is needed
% to accelerate the motor.

Torque = 0;
OmegaDesired = 0;
OmegaActual = 0;
Theta = 0;
torqueF = 0;
torqueR = 0;

indexF = 1;
indexR = 1;
%% Going to the correct starting point
M.normalizedVelocity = -.1;
while M.motorActualPosition > 40
    M.get_data;
    M
end
M.normalizedVelocity = .1;

while M.motorActualPosition < 40
    M.get_data;
    M
end
M.normalizedVelocity = 0;

%%
for omega = .05:.05:1

    tic
    time = toc;
    M.normalizedVelocity = omega;

    while M.motorActualPosition < 365
        motorData = M.get_data;
        torqueF(indexF) = M.motorActualCurrent;
        omegaDesiredF(indexF) = omega;
        omegaActualF(indexF) = M.motorActualVelocity;
        thetaF(indexF) = M.motorActualPosition;
        indexF = indexF + 1;
    end
    M.normalizedVelocity = 0;

    tic;
    time = toc;
    M.normalizedVelocity = -omega;
    while M.motorActualPosition > 35
        M
        motorData = M.get_data;
        torqueR(indexR) = M.motorActualCurrent;
        omegaDesiredR(indexR) = -omega;
        omegaActualR(indexR) = -M.motorActualVelocity;
        thetaR(indexR) = M.motorActualPosition;
        indexR = indexR + 1;
    end

    M.normalizedVelocity = 0;

    %Torque = [Torque; [torqueF; torqueR]]
    %omegaDesired = [omegaDesired; [omegaDesiredF, omegaDesiredR]];
    %omegaActual = [omegaActual; [omegaActualF, omegaActualR]];
    %Torque = [Torque; mean(torqueF); mean(torqueR)]
    %Omega = [Omega; omega; -omega]

end

%% Get Acceleration Data

% Go to correct starting point
M.get_data
M.normalizedVelocity = -.2;
while M.motorActualPosition > 40
    M.get_data;
end
M.normalizedVelocity = .2;

while M.motorActualPosition < 40
    M.get_data;
end

maxAccel = 4;
dAccel = .2
desiredVelocity = 0;
indF = 1;
indR = 1;
for acceleration = .05:dAccel:maxAccel
    acceleration
    M.normalizedVelocity = 0;
    pause(.1)
    tic
    while M.motorActualVelocity < 180 && M.motorActualPosition < 360
        dTime = toc;
        tic
        desiredVelocity = desiredVelocity + acceleration*dTime;
        M.normalizedVelocity = desiredVelocity;
        pause(.01)
        M
        M.get_data;
        accelDataF(indF) = acceleration;
        accelVelF(indF) = M.motorActualVelocity;
        accelPosF(indF) = M.motorActualPosition;
        accelTorqueF(indF) = M.motorActualCurrent;
        indF = indF + 1;
    end
    M.normalizedVelocity = 0;
    M.get_data;
    desiredVelocity = 0;
    pause(.1)
    tic
    while M.motorActualVelocity < 180 && M.motorActualPosition > 40
        dTime = toc;
        tic
        desiredVelocity = desiredVelocity - acceleration*dTime;
        M.normalizedVelocity = desiredVelocity;
        pause(.01)
        M.get_data;
        M
        accelDataR(indR) = -acceleration;
        accelVelR(indR) = -M.motorActualVelocity;
        accelPosR(indR) = M.motorActualPosition;
        accelTorqueR(indR) = M.motorActualCurrent;
        indR = indR + 1;
    end
    desiredVelocity = 0;
    M.normalizedVelocity = 0;
end

%% Data Manipulation

close all;

figure(1)
xlabel('blah')
title('blah')
divisions = 8;
divisor = 400/divisions

omegaDesiredFTheta = [omegaDesiredF', ceil(thetaF'/divisor)];
omegaActualFTheta = [omegaActualF', ceil(thetaF'/divisor)];
torqueFTheta = [torqueF', ceil(thetaF'/divisor)];

omegaDesiredRTheta = [omegaDesiredR', ceil(thetaR'/divisor)];
omegaActualRTheta = [omegaActualR', ceil(thetaR'/divisor)];
torqueRTheta = [torqueR', ceil(thetaR'/divisor)];

% Organize data into groups of theta with velocity and acceleration
for i = 1:divisions
    omegaDesiredFThetaCut = omegaDesiredFTheta(find(omegaDesiredFTheta(:,2)==i),1);
    omegaActualFThetaCut = omegaActualFTheta(find(omegaDesiredFTheta(:,2)==i),1);
    torqueFThetaCut = torqueFTheta(find(torqueFTheta(:,2)==i),1);

    omegaDesiredRThetaCut = omegaDesiredRTheta(find(omegaDesiredRTheta(:,2)==i),1);
    omegaActualRThetaCut = omegaActualRTheta(find(omegaDesiredRTheta(:,2)==i),1);
    torqueRThetaCut = torqueRTheta(find(torqueRTheta(:,2)==i),1);

    omegaActualFnoA = 0;
    omegaDesiredFnoA = 0;
    torqueFnoA = 0;
    maxAccel = 3;
    % Organize data into groups without acceleration and outliers
    for j = 2:length(omegaDesiredFThetaCut)
        if abs(omegaActualFThetaCut(j-1) - omegaActualFThetaCut(j)) < maxAccel && torqueFThetaCut(j) > 0 % second condition truncated in the source listing; torqueFThetaCut(j) > 0 assumed
            omegaActualFnoA = [omegaActualFnoA, omegaActualFThetaCut(j)];
            omegaDesiredFnoA = [omegaDesiredFnoA, omegaDesiredFThetaCut(j)];
            torqueFnoA = [torqueFnoA, torqueFThetaCut(j)];
        end
    end

    order = 6;
    if length(torqueFnoA) < 100
        order = 3;
    end
    vandyActual = [];
    vandyDesired = [];
    for j = 0:order
        vandyActual = [vandyActual, (omegaActualFnoA').^j];
        vandyDesired = [vandyDesired, (omegaDesiredFnoA').^j];
    end

    PolynomialValuesActualVelocity(i,1) = mat2cell(flipud(pinv(vandyActual)*torqueFnoA'))
    PolynomialValuesDesiredVelocity(i,1) = mat2cell(flipud(pinv(vandyDesired)*torqueFnoA'))
    pseudoOmega = 0:1:max(omegaActualFnoA);
    pseudoOmegaDesired = 0:.01:max(omegaDesiredFnoA);
    pseudoTorqueActual = polyval(cell2mat(PolynomialValuesActualVelocity(i,1)), pseudoOmega);
    pseudoTorqueDesired = polyval(cell2mat(PolynomialValuesDesiredVelocity(i,1)), pseudoOmegaDesired);

    figure(1)
    subplot(divisions/2, 2, i)
    plot(omegaActualFThetaCut, torqueFThetaCut, '.')
    axis([0,200,0,100])

    figure(2)
    subplot(divisions/2, 2, i)
    plot(omegaActualFnoA, torqueFnoA, '.')
    hold on
    plot(pseudoOmega, pseudoTorqueActual)
end

%% Accel Data Manipulation

%figure
%plot3(accelDataF, accelVelF, accelTorqueF)
thetaTest = (1:divisions)*400/divisions - 400/(divisions*2)
ind = 1;
order = 6;
for i = 1:length(accelPosF)
    [x,j] = min(abs(accelPosF(i)-thetaTest))
    if accelVelF(i) < 30 && accelTorqueF(i) > 0
        accelLowVel(ind) = accelDataF(i)
        accelTorqueLowVel(ind) = accelTorqueF(i)
        accelTorqueLowVelNoVel(ind) = accelTorqueLowVel(ind) - polyval(cell2mat(PolynomialValuesActualVelocity(j,1)), accelVelF(i)); % arguments truncated in the source listing; (j,1) and accelVelF(i) assumed
        ind = ind + 1;
    end
    accelTorqueNoVelAct(i) = accelTorqueF(i) - polyval(cell2mat(PolynomialValuesActualVelocity(j,1)), accelVelF(i)); % arguments truncated in the source listing; (j,1) and accelVelF(i) assumed
end
%figure
plot(accelDataF(1:length(accelPosF)), accelTorqueNoVelAct)
vanderLowVelNoVel = [];

for i = 0:order
    vanderLowVelNoVel = [vanderLowVelNoVel, accelLowVel'.^i]
end
PolynomialValuesAccel = flipud(pinv(vanderLowVelNoVel)*accelTorqueLowVelNoVel')
pseudoAccel = 0:.01:max(accelLowVel)
pseudoAccelTorque = polyval(PolynomialValuesAccel, pseudoAccel)

figure
plot(accelLowVel, accelTorqueLowVel, '.')
hold on;
plot(pseudoAccel, pseudoAccelTorque)
plot(accelLowVel, accelTorqueLowVelNoVel, 'o')

save VOID PolynomialValuesActualVelocity PolynomialValuesDesiredVelocity PolynomialValuesAccel
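CalibrationRun.m fits torque as a polynomial in velocity by building a Vandermonde matrix one power-of-omega column at a time and applying the pseudoinverse. The same least-squares idea can be sketched in pure Python at order 1 (a line), solving the normal equations directly instead of using `pinv`; the function name and the toy data are mine:

```python
def fit_line(x, y):
    """Least-squares fit of y ~ c0 + c1*x via the normal equations.

    CalibrationRun.m does the same thing at higher order with a
    Vandermonde matrix and pinv; order 1 keeps the algebra short.
    """
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    det = n * sxx - sx * sx
    c1 = (n * sxy - sx * sy) / det
    c0 = (sy - c1 * sx) / n
    return c0, c1

# Torque rising linearly with velocity: the fit recovers the line exactly.
c0, c1 = fit_line([0, 1, 2, 3], [10, 12, 14, 16])
print(c0, c1)  # 10.0 2.0
```

In the calibration script the fitted coefficients are stored per position bin (`PolynomialValuesActualVelocity`) and later subtracted from the measured current in HapGUI to isolate the torque due to grasping.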
A.7    Arduino Code - SerialReadWrite.pde

#include "WProgram.h"

/* SerialReadWriteTest

   A sketch by Alex J Burtness
   Created: 30APR10
   Last Update: 19OCT10

   This file opens a serial connection and waits for a command byte.
*/

//GLOBAL CONSTANTS
int C2 = 11;
int C3 = 10;
int aPin = 0;
float pressure = 0;
float accel = 0;
float pi = 3.14159;
float e = 2.71828;

//GLOBAL VARIABLES
long randomvalue = 0;  // random value
long countervalue = 0; // counter value
int serialvalue;       // value for serial input
int started = 0;       // flag for whether we've received serial yet
long time;

//SETUP (RUN ONCE)
void setup()
{
  pinMode(C2,OUTPUT);
  Serial.begin(9600);
}

//LOOP (RUN WHILE(1))
void loop()
{
  if(Serial.available()) // check to see if there's serial data in the buffer
  {
    serialvalue = Serial.read(); // read a byte of serial data
    if (serialvalue == 'p')
    {
      time = micros(); // Parsing through data
      accel = AccelReading();
      pressure = PressureReading();
      Serial.print(time);
      Serial.print('/', BYTE);
      Serial.print(accel);
      Serial.print('/', BYTE);
      Serial.print(pressure);
      Serial.print('/', BYTE);
      Serial.print('\n', BYTE);
    }
    else // Any byte other than 'p' is treated as a pressure command
    {
      serialvalue = int(serialvalue*2.15 + 40); // Pressure value from MATLAB
      analogWrite(C2,serialvalue); // Vibrotactor
      analogWrite(C3,serialvalue); // Vibrotactor
    }
  }

  //if(started) { // loop once serial data has been received
  //  Serial.println(serialvalue); // echo the received serial value
  //  Serial.println(); // print a line-feed
  //  delay(100); // pause
  //}
}

//DEFINE METHODS
float PressureReading() // Used to mimic a sinusoidal pressure signal
{
  time = micros();
  float pressure = .5 + .5*sin(time*pi/4000000);
  return pressure;
}

float AccelReading() // Used to mimic a periodic decaying signal
{
  time = micros() % 3000000;
  float accel = abs(pow(e,-time/100000)*sin(time*pi/10000));
  return accel;
}
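The sketch's PressureReading and AccelReading functions mimic sensor output as functions of the microsecond clock: a slow sinusoid for pressure and a periodically restarting exponential decay for acceleration. The same signal shapes, with the constants from the sketch, can be reproduced in Python for offline checking (function names are mine):

```python
import math

def pressure_reading(t_us):
    """Sinusoidal test pressure in [0, 1], as in PressureReading()."""
    return 0.5 + 0.5 * math.sin(t_us * math.pi / 4000000)

def accel_reading(t_us):
    """Periodic decaying oscillation, as in AccelReading(); the
    modulo restarts the decay every 3 seconds."""
    t = t_us % 3000000
    return abs(math.exp(-t / 100000) * math.sin(t * math.pi / 10000))

print(pressure_reading(0))        # 0.5 (mid-scale)
print(pressure_reading(2000000))  # 1.0 (peak at a quarter period)
print(accel_reading(0))           # 0.0
```

Plotting these against a microsecond ramp is a quick way to sanity-check the serial link end to end before the real sensors are attached.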
Appendix B
Data Sheets
Acceleration
Piezotron® Accelerometer
Type 8694M1
Miniature, Wide Frequency Response, Voltage Mode Triaxial Accelerometer

Light 2,5 gram weight triaxial accelerometer that simultaneously measures vibration in three, mutually perpendicular axes (x, y and z). Designed primarily for measurement applications requiring a high frequency response capability in all three axes.

• Low impedance voltage mode
• Small size and lightweight, less than 2,5 grams
• Quartz sensing element
• Conforming to CE

[Dimensional drawing: 10,5 x 10,8 x 3,3 mm body, ø5,1 output cable with shrink tubing and epoxy, 4-pin neg. connector. Pin assignment: pin 1 X axis, pin 2 ground, pin 3 Z axis, pin 4 Y axis.]

Description
The triaxial accelerometer Type 8694M1 consists of three individual sensor elements mounted in an orthogonal configuration, with each containing a preloaded quartz-crystal measuring assembly, a seismic mass, and a miniature hybrid Piezotron electronics. The signal conditioning circuit converts the charge developed in the quartz elements as a result of the accelerometer being subjected to a vibration into a useable high-level voltage output signal at a low impedance level.

Since the Type 8694M1 is a triaxial accelerometer, each sensor axis requires individual excitation power and signal processing. Kistler's 5100 Piezotron coupler series includes a wide selection of single and multichannel units that include both gain and frequency tailoring. Industry standard voltage mode IEPE (Integral Electronic Piezo-Electric) power supply/couplers can also be used with the accelerometer.

Application
The accelerometer Type 8694M1 is well suited for measuring dynamic acceleration, vibration and shocks in applications where minimum mass, small mounting size, and high resonant frequency are essential. The dynamic characteristics of very light test objects are practically not influenced by the accelerometer's small mass. The triaxial accelerometer is ideal for measuring acceleration vectors in space, vibration measurement on thin-walled structures, aircraft and automotive structures, and general vibration measurements.

Mounting
The accelerometer Type 8694M1 can be attached to the test surface by using wax or adhesive. Reliable and accurate measurements require that the mounting surface be clean and flat. The operating instruction manual for the accelerometer Type 8694M1 provides detailed information regarding mounting surface preparation.

Adhesive mounting is recommended for the widest transfer of frequency information, but double-sided adhesive tape or wax may also be used. When using the anodized adaptor, Types 8439 or 8440, the accelerometer will be ground isolated from the test object.

The recommended adhesives, to be placed between the accelerometer and the object or a ground isolated mounting pad, include:
• Petro wax, Type 8432
• Loctite 430: general use between metals
• Loctite 495: general use between other materials
• 3M Scotch Weld 1838: high temperatures, above 165 °C

Note: Removal of this substance is extremely difficult and care should be exercised when removing the accelerometer.

Page 1/2
This information corresponds to the current state of knowledge. Kistler reserves the right to make technical changes. Liability for consequential damage resulting from the use of Kistler products is excluded.
©2008, Kistler Group, Eulachstrasse 22, 8408 Winterthur, Switzerland
Tel. +41 52 224 11 11, Fax +41 52 224 14 14, info@kistler.com, www.kistler.com

Piezotron® Accelerometer – Miniature, Wide Frequency Response, Voltage Mode Triaxial Accelerometer, Type 8694M1

Technical Data
Specification                            Unit      Type 8694M1
Acceleration range                       g         ±500
Acceleration limit                       gpk       ±1 000
Threshold nom. (noise 100 µVrms)         grms      0,025
Sensitivity, ±5 %                        mV/g      4
Resonant frequency mounted, nom.         kHz       80
Frequency response, ±5 %                 Hz        10 … 20 000
Amplitude non-linearity                  %FSO      ±1
Time constant, nom.                      s         0,5
Transverse sensitivity, nom.             %         <5

Environmental
Random vibration, max.                   grms      ±2 000
Shock limit (1 ms pulse)                 gpk       ±2 000
Temperature coefficient of sensitivity   %/°C      –0,05
Operating temperature range              °C        –196 … 135
Storage temperature range                °C        –195 … 150

Output
Bias, nom.                               VDC       4
Impedance                                Ω         25
Voltage full scale                       V         ±2
Current                                  mA        ±2

Source
Voltage                                  VDC       12 … 30
Constant current                         mA        4
Impedance, min.                          kΩ        100

Construction
Sensing element                          Type      quartz-compression
Case/base                                material  titanium
Degree of protection case/connector                IP66 (EN 60529)
Connector                                Type      4-pin neg. int.
Ground isolated                                    with pad
Mass                                     grams     2,5
Mounting                                 Type      adhesive/wax

Included Accessories                               Type
• Mounting wax                                     8432

Optional Accessories                               Type
• Mounting adapter with M3 thread                  8439
• Mounting adapter with 4-40 UNC thread            8440

Ordering Key
Type 8694, Range ±500 g: suffix M1

Measuring Chain                                    Type
1 Low impedance sensor                             8694M1
2 Sensor cable, 4-pin pos. to (3x) BNC pos.        1576...
3 Power supply/signal conditioner                  51…
4 Output cable, BNC pos. to BNC pos.               1511
Readout (not supplied)

1 g = 9,80665 m/s2, 1 Inch = 25,4 mm, 1 gram = 0,03527 oz, 1 lbf-in = 0,113 N∙m

Page 2/2
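The data sheet's headline numbers are mutually consistent: a 4 mV/g sensitivity over the ±500 g range gives exactly the ±2 V full-scale output listed. A quick Python check of that relation, plus the inverse conversion from a measured voltage (bias removed) back to acceleration; the constant and function names are mine:

```python
SENSITIVITY_V_PER_G = 0.004   # 4 mV/g from the data sheet
RANGE_G = 500                 # +/-500 g range

full_scale_v = SENSITIVITY_V_PER_G * RANGE_G
print(full_scale_v)  # 2.0, matching the +/-2 V full-scale spec

def volts_to_g(v):
    """Convert accelerometer output voltage (bias removed) to g."""
    return v / SENSITIVITY_V_PER_G

print(volts_to_g(0.1))  # a 100 mV swing corresponds to 25 g
```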
Vita
Alex J. Burtness was born on March 16th, 1987 in Minneapolis, Minnesota. He
later moved to Portland, Oregon and attended Sunset High School, where he graduated in 2005. Following his graduation he accepted an appointment to the United
States Naval Academy and was sworn in as a Midshipman on June 28th, 2006. At the
Academy he majored in Systems Engineering and graduated with Honors and Distinction. In 2010 he was commissioned as an Ensign in the United States Navy and
was accepted into the training pipeline to become an Explosive Ordnance Disposal
Officer.
Following his commissioning, Alex was given orders to attend the Johns Hopkins
University to finish his graduate studies in Mechanical Engineering while doing research for the Navy Explosive Ordnance Disposal Technology Division. In February
2011, Alex will report to the Naval Diving and Salvage Training Center in Panama
City where he will begin training to receive his qualification as an Explosive Ordnance
Disposal Officer.