Computer Vision Module Application for Finding a Target in a Live Camera
AIM OF INTERNSHIP
An internship offers a variety of benefits for young workers who want to broaden their
chances of landing a job and jump-starting their careers. Internships give you a taste of what
a profession is like, help you build your resume, and let you meet people who can help you in
your career. Don't be passive during an internship and miss opportunities to expand your
business background; take advantage of the many benefits of holding an internship.
The purpose of the internship is to provide an opportunity to seek, identify, and further
develop an appropriate level of professionalism. An internship assists with career development
by providing real work experience that gives students opportunities to explore their
interests and develop professional skills.
Advantages of an Internship
Internships provide students with numerous perks: they gain experience, develop skills, make
connections, strengthen their resumes, learn about a field, and assess their interests and
abilities. Offering a paid internship is particularly beneficial because it enables
economically disadvantaged youth to participate.
ABSTRACT
This project aims to develop a generalized framework for vision-based multiple ground target
finding and action using a low-cost UAV system. The framework can be effectively deployed in a
variety of UAV applications, with a suitable detection module, in fully autonomous and supervised
missions such as search and rescue, inspection of flight debris, and spot spraying of
weedicide/pesticide. The developed framework was verified using Software in the Loop (SITL)
simulation and outdoor flight tests. Results show that the framework is capable of performing
multiple target finding and action tasks in real-world conditions.
INTRODUCTION
Unmanned Aerial Vehicles (UAVs) have been a primary research focus in the recent past, and
civilian use of UAVs is continuously increasing. Aerial imagery, remote sensing, and mapping are
some of the popular civilian uses of UAVs. Vision-based target tracking and following is a useful
task in multiple applications such as search and rescue [1], environmental monitoring [2], and
infrastructure inspection [3]. Vision-based autonomous landing is an area related to our work.
Usually, a single pre-defined target, mostly with a specific pattern to ease detection [4-6], is
employed in vision-based landing tasks. Multiple techniques, including Image-Based Visual Servoing
(IBVS) and relative distance-based [7] methods, are popular. However, the applicability of these
techniques is still under development when there are multiple visually similar targets. For
instance, IBVS needs a visually unique feature to work. Detection and tracking of a moving target
[8-11] have also been explored. For example, Greatwood et al. [12] and Vanegas et al. [13]
demonstrated tracking of a pre-defined target: they used a predefined pattern fixed to an RC rover
to track it. Tracking of a user-selected object was also demonstrated by Cheng et al. [14].
The authors used a UAV fitted with a gimbaled camera and a Kernelized Correlation Filter (KCF)-
based visual-tracking scheme for tracking the selected object. The authors claim their strategy is
robust to full and partial occlusion of the tracked object. Zarudzki et al. [15] demonstrated
image-based multiple target tracking. The authors employed an IBVS scheme, which uses the color of
each target for tracking and control. The applicability of IBVS schemes is limited when there are
multiple visually similar targets. In other related work, autonomous vision-based navigation
techniques have been studied for power line inspection [16], a water channel following mission
[17], and navigation in orchards [18]. In these works, the general strategy is to follow
continuous image features, such as lines, present in the observed structures. Therefore, these
strategies are not applicable to the set of discrete targets studied in our work. Alsalam et al.
[19], Choi et al. [20], and Hinas et al. [21] have demonstrated vision-based guidance systems for
target finding and action on a single ground target. In our work, the task involves detection,
navigation, and action of a UAV on a set of discrete, visually identical ground targets.
Therefore, tracking continuous image features such as lines is not applicable. The targets go in
and out of the camera Field of View (FOV) due to the requirements of the task, so IBVS schemes and
relative distance-based techniques are not useful: the targets are visually similar and repeatedly
leave the FOV.
In this project, we present a framework for vision-based target finding and action on multiple
ground targets. The framework uses a vision-based target detection algorithm and a position-based
target tracking pipeline built on the Observe, Orient, Decide, and Act (OODA) decision loop [14]
for searching and performing an action on multiple ground targets. The detection and tracking
pipeline detects and tracks the targets only by their color and position; the features, shape, and
size of the targets are not used. Therefore, the framework can track both similar and dissimilar
targets, as long as the detection stage detects them.
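As an illustration, a minimal sketch of the position-only tracking idea follows, assuming target
positions are kept in a local metric frame; the Target and TargetTracker names, the 1 m merge
radius, and the methods are hypothetical placeholders, not the framework's actual API.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Target:
    north: float   # estimated target position in a local frame (m)
    east: float
    visited: bool = False

class TargetTracker:
    """Position-only tracker: a detection within merge_radius of a known
    target is treated as a re-observation, so visually identical targets
    are still distinguished by where they are on the ground."""

    def __init__(self, merge_radius: float = 1.0):
        self.merge_radius = merge_radius
        self.targets: List[Target] = []

    def update(self, detections: List[Tuple[float, float]]) -> None:
        # Register a detection as a new target only when no known target
        # lies within merge_radius of it (Observe/Orient steps).
        for north, east in detections:
            if not any((t.north - north) ** 2 + (t.east - east) ** 2
                       < self.merge_radius ** 2 for t in self.targets):
                self.targets.append(Target(north, east))

    def next_unvisited(self) -> Optional[Target]:
        # Decide step: pick the next target for the UAV to act on.
        return next((t for t in self.targets if not t.visited), None)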
MODULES
Filtering
Analysis of the experimental data indicates that the image-based target position estimation error
increases with the UAV rotation rate. Therefore, images captured while the UAV rotates at a high
rate are discarded.
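A hedged sketch of such a filter is shown below; the 0.3 rad/s threshold and the function name are
assumptions for illustration, not values from this work.

MAX_BODY_RATE = 0.3  # rad/s; assumed threshold, tune per platform

def frame_is_usable(roll_rate, pitch_rate, yaw_rate):
    """Return False when any body rotation rate exceeds the threshold,
    so the corresponding image is skipped by the rest of the pipeline."""
    return max(abs(roll_rate), abs(pitch_rate), abs(yaw_rate)) < MAX_BODY_RATE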
Detection
The Detection stage performs target detection and also extracts the centers of the detected
targets for use in the Position Estimation stage. In our previous work [21], we used the Hough
transform via the OpenCV HoughCircles function for detection. However, it is slow, and the
processing time increases with the number of targets: the processing rate of that algorithm falls
well below 10 frames/s for multiple targets on a Raspberry Pi 3.
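Since the pipeline detects targets by their color (see Introduction), a faster alternative is
plain HSV thresholding with contour extraction. The sketch below is one such detector; the HSV
bounds (for a red-ish target) and the minimum blob area are illustrative assumptions, not
calibrated values from this work.

import cv2
import numpy as np

LOWER_HSV = np.array([0, 120, 70])    # assumed bounds for a red target
UPPER_HSV = np.array([10, 255, 255])

def detect_centers(frame_bgr, min_area=100.0):
    """Threshold the frame in HSV space and return the pixel centers
    (u, v) of color blobs whose area exceeds min_area."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:   # m00 is the blob area in pixels
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers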
Position Estimation
In this stage, the position of each target is estimated using a pinhole camera model, with the
position of the UAV as an input. The Pixhawk autopilot uses barometer measurements to calculate
the altitude (Z position), and the barometer is usually affected by wind. The resulting altitude
variation in low-altitude flight, such as hovering close to the target, is problematic.
Alternatively, an ultrasonic sonar can provide distance measurements accurate to a few
centimeters.
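For a downward-facing camera, the pinhole model reduces to scaling the pixel offset from the
principal point by the ratio of altitude to focal length. The sketch below shows that projection;
fx, fy, cx, cy are assumed camera intrinsics in pixels, and camera tilt and lens distortion are
ignored for clarity.

def pixel_to_ground_offset(u: float, v: float, cx: float, cy: float,
                           fx: float, fy: float, altitude_m: float):
    """Project pixel (u, v) onto the ground plane, returning the (x, y)
    offset in metres from the point directly below the camera."""
    x = (u - cx) * altitude_m / fx   # horizontal offset along the image x-axis
    y = (v - cy) * altitude_m / fy   # horizontal offset along the image y-axis
    return x, y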
H/W SYSTEM CONFIGURATION:-
(Fig. 1.1 and Fig. 1.1.2: hardware system configuration)
1.1.3 Vision:
Our aim is to be a sustained, long-term partner of our clients, taking 'hands-free'
Quality Assurance (QA) responsibility.
To be a global organization of repute, delivering world-class Quality Assurance
services and education, to enable our clients' business success.
1.1.4 Mission: