Speech:
Good morning,
Mr. President of the jury (name), the jury member, Mr. (name), and our supervisors (their names).
Welcome to the thesis defense on the theme "Design and implementation of a forest fire detection and localization system using UAVs", presented by Bouzidi Farah and Charef Wided, students of aeronautics, Avionics Department.
We would like to thank you all for your attendance and for showing interest in our work. Thanks go to our families, to our friends who came from afar, and to everyone who attended our presentation.
Work plan:
To begin, we will present our work plan, which includes:
Introduction:
Over the last decades, large environmental changes and frequent natural disasters have had a great impact on the survival of the natural environment and have caused significant economic losses. Forest fires are one example: to a certain extent, they affect people's lives, as has happened in Algeria for many years.
Forest fires are highly complex, unstructured environments where the use of multiple sources of information at different locations is essential. Wildfires represent an appropriate scenario for UAV capabilities and performance in fire detection, localization and prevention.
For this reason, it is necessary to provide a UAV-based forest fire detection and localization system that relies on developed approaches such as: computer-vision object detection algorithms based on convolutional neural networks trained on an aerial-image database, a stereo-vision localization system with depth estimation, and a designed and controlled Pixhawk quadcopter.
The solutions given to the problem we introduced are a minor portion of a bigger picture that allows us to serve humanity and save lives with low-cost, applicable methods.
Literature Review:
Review of Vision-Based Automatic Forest Fire Detection Techniques:
1. DeepFire: A Novel Dataset and Deep Transfer Learning Benchmark for Forest Fire Detection, Ali Khan, Bilal Hassan, Somaiya Khan, Ramsha Ahmed, and Adnan Abuassba, April 2022.
Summary: proposed a robust forest fire detection system that requires precise and efficient classification of forest fire imagery against no-fire imagery. The authors created a novel dataset (DeepFire) containing diversified real-world forest imagery with and without fire, and trained a VGG-19 transfer-learning model to achieve improved prediction accuracy, comparing its performance against several machine learning approaches.
2. Forest Fire Detection and Prediction from Image Processing Using RCNN, Abhay Chopde, Ansh Magon, Shreyas Bhatkar, 2022.
Summary: proposed a deep-learning-based forest fire detection model that can detect forest fires from satellite images. Due to the limitations of the CNN model, an R-CNN was used to reduce the prediction time. The model was able to differentiate between images with and without fire with an accuracy of 97.29%. This approach focuses on observing forests without steady human supervision.
3. Forest Fire Detection Using Drone, Dr. S. Mary Praveena, B. Akshaya, B. B. Devipriya, U. Divya, K. Mirudhula, 2021.
Summary: proposed to develop a real-time forest fire monitoring system employing a UAV and remote sensing techniques. The drone is equipped with sensors, a mini processor and a camera. Data from the different sensors and the images taken by the camera are processed on-board.
4. UAV Mapping with Semantic and Traversability Metrics for Forest Fire Mitigation, David Jacob Russell, Tito Arevalo-Ramirez, Chinmay Garg, Winnie Kuang, Francisco Yandun, David Wettergreen, George Kantor, March 2022.
Summary: In this work, the researchers proposed an unmanned aerial vehicle (UAV) that can map a region in order to clear fuel and prevent the spread of fire. They developed a multi-sensor payload consisting of cameras, LiDAR and GPS with onboard processing, a SLAM system to understand the 3D structure of the environment, and a semantics system to identify fuel and other features in the environment. This approach provides a 3D map of the environment and georegistered maps describing the locations of fire.
5. A Vision-Based Detection and Spatial Localization Scheme for Forest Fire Inspection from UAV, Kangjie Lu, Renjie Xu, Junhui Li, Yuhao Lv, Haifeng Lin and Yunfei Liu, February 2022.
Summary: this paper presented a lightweight model, NanoDet, applied as a detector to identify and locate fires in the field of view. After capturing 2D images of fires with the detector, binocular stereo vision is applied to calculate the depth map, and algorithms were proposed to eliminate interference values when calculating the depth of the fire area.
6. Design, Modelling, Localization, and Control for Fire-Fighting Aerial Vehicles, Dimitris Chaikalis, Nikolaos Evangeliou, Anthony Tzes and Farshad Khorrami, 2022.
Summary: In this paper, the overall dynamic model of the system is developed. The aerial vehicle is designed with vertical and lateral rotors. A model-based controller is developed to guarantee stable flight of the aerial vehicle, and a localization method is proposed in which the vehicle's relative position with respect to its base is computed by including a force sensor on the vehicle.
Methodology :
Fire detection using computer vision :
Traditional forest fire recognition algorithms depend heavily on hand-crafted flame characteristics such as color, texture, shape and motion. However, many missed and false detections occur due to interference in complex and changeable forest scenes. In recent years, convolutional neural networks (CNNs) have been widely used in the image field. Unlike traditional manual feature extraction methods, a CNN can automatically learn features through convolution operations, with more robust generalization performance and higher recognition accuracy, and CNNs are now widely used in forest fire detection.
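The convolution operation a CNN builds on can be sketched in a few lines. The example below is purely illustrative: it applies a hand-written vertical-edge kernel to a synthetic image, whereas in a real CNN the kernel weights are learned from data during training rather than designed by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Each output value is a weighted sum over one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel; a CNN would learn such weights automatically.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Synthetic 6x6 "image": bright left half, dark right half.
img = np.zeros((6, 6))
img[:, :3] = 1.0

response = conv2d(img, edge_kernel)
print(response.shape)  # (4, 4)
print(response.max())  # 3.0, strongest where the kernel straddles the edge
```

The same mechanism, stacked over many learned kernels and layers, is what lets a CNN discover flame features (color gradients, texture, shape) without manual engineering.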
Data selection :
A large number of forest fire aerial videos and images are available on the internet; we chose the environment closest to our local terrain. We therefore extracted 2,906 flame-containing images from the FLAME dataset to be labeled.
Data preprocessing:
1. Image labeling: manual labeling by creating bounding boxes, anchored at the upper-left corner, over the area that represents fire, which is the region of interest. The labeling was done using the MATLAB Image Labeler app.
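For illustration, a bounding box in the upper-left-corner convention used by the MATLAB Image Labeler can be written as [x, y, width, height]. The sketch below, with hypothetical pixel values, converts such a box to corner coordinates:

```python
def box_to_corners(box):
    """Convert an [x, y, w, h] box (x, y = upper-left corner, as in the
    MATLAB Image Labeler convention) to (x1, y1, x2, y2) corners."""
    x, y, w, h = box
    return (x, y, x + w, y + h)

# A hypothetical fire bounding box: upper-left at (120, 80), 60x40 pixels.
print(box_to_corners([120, 80, 60, 40]))  # (120, 80, 180, 120)
```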
2. Data splitting: the algorithm uses random splitting to divide the dataset into two subsets: the training data, used for network learning, and the test data, used to measure the accuracy of the method once training ends. After several training runs and a lot of research, we settled on the splitting percentage that ensures the highest algorithm performance: 80% for the training data and 20% for the test data.
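The random 80/20 split described above can be sketched as follows. The file names and the fixed seed are illustrative assumptions, not our exact implementation (which used MATLAB's tooling):

```python
import random

def split_dataset(items, train_frac=0.8, seed=42):
    """Randomly shuffle a dataset and split it into training and test subsets."""
    items = list(items)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# 2906 labeled images, as in our dataset (file names are placeholders).
images = [f"img_{i:04d}.jpg" for i in range(2906)]
train, test = split_dataset(images)
print(len(train), len(test))  # 2324 582
```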
3. Data augmentation: this step is very important because deep learning requires large amounts of data. Augmentation is applied to enlarge the dataset and expose the neural network to a wide variety of image variations. The methods used are:
- Color jitter: changes in hue, contrast and saturation.
- Random horizontal flipping: horizontal flipping was chosen because fire generally burns upwards.
- Random X/Y scaling: the image is scaled outward or inward along X and Y respectively, so an object in the new image can be smaller or bigger than in the original image.
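The three augmentations above can be sketched in a simplified form. This is not our exact MATLAB pipeline: contrast jitter stands in for the full hue/contrast/saturation jitter, and the scaling uses nearest-neighbour index resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply simplified versions of the three augmentations to an
    HxWx3 float image with values in [0, 1]."""
    # Random horizontal flip: fire burns upwards, so flipping left-right
    # keeps the flame orientation plausible (a vertical flip would not).
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Contrast jitter: scale values around the mean, then clip to [0, 1].
    contrast = rng.uniform(0.8, 1.2)
    image = np.clip((image - image.mean()) * contrast + image.mean(), 0, 1)
    # Random X/Y scaling via nearest-neighbour resampling: the content
    # appears larger or smaller while the frame size stays fixed.
    sx, sy = rng.uniform(0.8, 1.2, size=2)
    h, w = image.shape[:2]
    rows = np.clip((np.arange(h) / sy).astype(int), 0, h - 1)
    cols = np.clip((np.arange(w) / sx).astype(int), 0, w - 1)
    return image[np.ix_(rows, cols)]

sample = rng.random((64, 64, 3))
out = augment(sample)
print(out.shape)  # (64, 64, 3)
```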
Results discussion:
In object classification, many metrics are used to characterize algorithm performance. In order to compare the four algorithms we trained, we obtained the following results:
1. Confidence score:
The confidence score is predicted by the classifier and represents the probability that there is an object in the anchor box.
- YOLOv2 (left): as shown, the confidence score is between 0.56 and 0.76.
- YOLOv4 (right): the confidence score is between 0.55 and 0.98.
- SSD: suffered from overlapping detections and a low confidence rate that went under 0.5.
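In practice, detections are kept or rejected by comparing their confidence score against a threshold. A minimal sketch, with illustrative scores chosen inside the ranges reported above (the boxes themselves are omitted as placeholders):

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence score exceeds the threshold,
    the same criterion that rejects the weak SSD detections."""
    return [d for d in detections if d["score"] > threshold]

# Illustrative scores in the ranges reported above.
dets = [
    {"model": "yolov2", "score": 0.56},
    {"model": "yolov2", "score": 0.76},
    {"model": "yolov4", "score": 0.98},
    {"model": "ssd",    "score": 0.42},  # below 0.5: rejected
]
kept = filter_detections(dets)
print(len(kept))  # 3
```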
2. Average precision:
In object classification, the average precision (AP) is the most commonly used metric, as it assesses the overall accuracy of the model. The AP measures how accurate the model is on a specific label (class). For all models, we calculate the AP for the label "Fire", which is based on the precision-recall curve.
Using all the predictions for detecting fire in images, a smooth precision-recall curve was built. This curve establishes the compromise between the recall rate and the precision rate as the prediction confidence score evolves: higher confidence thresholds tend to give higher precision but a lower recall rate. Both YOLO models reached a recall rate near 100%, but at that point their precision was near 85%. The best performing model is the one with the highest area under the curve (AUC). From the figures, YOLOv4 Tiny and YOLOv2 had similar results and were the best performing models. However, the low precision at higher recall rates and the lower total recall mean that the models produce much prediction noise and many false positives (SSD ResNet50 had the worst results). Therefore, considering all the model predictions, the confidence score, the AP as a balanced metric between recall and precision, and the training time, YOLOv4 was the best performing model.
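The AP described above can be computed by ranking predictions by confidence and accumulating the precision at each true positive, which sweeps the confidence threshold along the precision-recall curve. A sketch with toy data (not our actual results):

```python
def average_precision(scores_and_labels):
    """Compute AP as the area under the precision-recall curve obtained by
    sweeping the confidence threshold over ranked predictions.
    `scores_and_labels` is a list of (confidence, is_true_positive) pairs."""
    ranked = sorted(scores_and_labels, key=lambda p: -p[0])
    total_pos = sum(1 for _, tp in ranked if tp)
    ap, tp_seen = 0.0, 0
    for i, (_, tp) in enumerate(ranked, start=1):
        if tp:
            tp_seen += 1
            # Precision at this rank, weighted by the recall step 1/total_pos.
            ap += (tp_seen / i) / total_pos
    return ap

# Toy (confidence, correct?) predictions for the "Fire" class.
preds = [(0.98, True), (0.90, True), (0.76, False), (0.56, True), (0.40, False)]
print(round(average_precision(preds), 3))  # 0.917
```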
CHAPTER 04
Slide 45:
Lastly, in this part we present the design and control of a quadcopter using the Pixhawk autopilot.
Slide 46:
Any project involving an autonomous vehicle will typically include the following parts:
Slide 47:
Hardware: a set of sensors, controllers, and output devices that enables the drone to perceive its environment and take appropriate action in light of the circumstances. It essentially consists of every physical part that makes up the drone.
Slide 50:
Firmware: the code that runs on the controller and ensures dependability, stability, and features by exploiting sensor inputs, elaborating them through a processing unit, and delivering outputs to various peripherals such as ESCs, servos, and so on. The firmware enables the integration of the hardware components.
Software: this is the controller's user interface, also known as the Ground Control Station (GCS). Both PCs and mobile devices can run the software. A GCS enables users to set up, configure, test, and fine-tune the vehicle configuration by acting simply and directly on the firmware. Autonomous mission planning, execution, and post-mission analysis are possible with advanced packages or add-ons. The operator often uses this to program the mission.
Slides 51, 52 and 53:
The following slides show some indoor and outdoor tests performed after completing the hardware build and software implementation.